Positive

Andrei Alexandrescu SeeWebsiteForEmail at erdani.org
Sun Oct 5 21:00:59 PDT 2008


Bruce Adams wrote:
> On Mon, 06 Oct 2008 00:55:33 +0100, Andrei Alexandrescu 
> <SeeWebsiteForEmail at erdani.org> wrote:
>>>>
>>>> Andrei
>>>  I disagree. I'm not saying it's easy, but it could be done. We would
>>> have to start with something relatively simple and work our way up, but
>>> it could be done.
>>
>> I, too, think it can be done in the same way supersonic mass
>> transportation can be done, but having done research in the area I can
>> tell you that you are grossly underestimating the difficulties. It is a
>> project of gargantuan size. Today the most advanced systems have only
>> managed to automatically prove facts that look rather trivial to the
>> casual reader. There is absolutely no hope for D to embark on this.
>>
> I'm not asking for a generalised theorem prover. Something to handle
> even the simple cases is a start.
> I agree that D won't have this (any time soon), but mainly because there
> are several hundred things higher up the priority list.
> 
>>> Compilers already do all kinds of clever analyses behind the scenes,
>>> but each one is often hard-coded. I suspect the main difficulty is
>>> giving users too much rope by which to hang themselves, or rather to
>>> hang the compiler trying to prove something it doesn't realise it
>>> can't. Marrying declarative / constraint-based programming at compile
>>> time is creeping in via templates. I wish it were less well hidden.
>>
>> I discussed the problem this morning with Walter and he also started
>> out rather cocky: if you assert something early on, you can from then on
>> assume the assertion is true (assuming no assignment took place, which
>> is not hard to establish if you have control flow analysis in place). He
>> got an arrow in a molar with the following example:
>>
>> double[] vec;
>> foreach (e; vec) assert(e >= 0);
>> // now we know vec is all nonnegatives
>> normalize(vec);
>>
>> The definition of normalize is:
>>
>> void normalize(double[] vec)
>> {
>>      foreach (e; vec) assert(e >= 0);
>>      auto sum = reduce!"a + b"(0.0, vec);
>>      assert(sum > 0);
>>      foreach (ref e; vec) e /= sum;
>> }
>>
>> If normalize takes udouble[] and you have one of those, there's no 
>> need to recheck. Automated elimination of the checking loop above is 
>> really hard.
>>
>>
>> Andrei
> 
> Is it? I think your example needs to be expanded, or we may be talking
> at cross purposes.
> Firstly, I would rearrange things a little, though in principle it makes
> no difference.
> 
>  pre
>  {
>     static assert(foreach (e; vec) assert(e >= 0));
>  }
>  void normalize(double[] vec)
>  {
>     auto sum = reduce!"a + b"(0.0, vec);
>     assert(sum > 0);
>     foreach (ref e; vec) e /= sum;
>  }
> 
> double[] vec;
> static assert(foreach (e; vec) assert(e >= 0));  // line X
> 
> // now we know vec is all nonnegatives
> normalize(vec);   // line Y
> 
> Imagine I have a Prolog-style symbolic unification engine to hand inside
> my compiler.
> 
> At line X the static assertion is evaluated.
> The logical property e >= 0 is asserted on the vec symbol.
> The compiler reaches line Y.
> vec has not been modified, so it still has this property associated with it.
> We now unify the symbol representing vec with the contract on normalize.
> The unification succeeds and everything is fine.
> 
> Now imagine we have a stupid analyser.
> 
> double[] vec;
> static assert(foreach (e; vec) assert(e >= 0));  // line X
> 
> vec[0] -= 1; // line Z
> 
> // now we know vec is all nonnegatives
> normalize(vec);   // line Y
> 
> When the compile-time analyser reaches line Z, it can't work out whether
> or not the contract still applies, so it removes the assertion that
> e >= 0 for all elements of vec, or rather records that it is no longer
> provably true.
> Now when we reach line Y the unification fails.
> We don't report a compile-time constraint violation error, because we
> haven't proved the contract to have failed.
> We report a compile-time "constraint unprovable" error or warning.
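>
> To make the bookkeeping concrete, here is a toy sketch of the idea in D
> (purely illustrative: the names are made up, and a real pass would work
> on the compiler's internal representation rather than on strings):
>
> import std.algorithm : canFind;
> import std.stdio : writeln;
>
> // symbol name -> properties the analyser can currently prove about it
> string[][string] facts;
>
> // an assertion like line X records a fact about a symbol
> void assertFact(string sym, string prop)
> {
>     auto props = facts.get(sym, null);
>     props ~= prop;
>     facts[sym] = props;
> }
>
> // a write we cannot reason about (line Z) discards everything we knew
> void invalidate(string sym) { facts.remove(sym); }
>
> bool provable(string sym, string prop)
> {
>     auto p = sym in facts;
>     return p !is null && (*p).canFind(prop);
> }
>
> // a call site (line Y) unifies what we know with what the callee requires
> void checkCall(string callee, string sym, string required)
> {
>     if (!provable(sym, required))
>         writeln("warning: cannot prove '", required, "' for '", sym,
>                 "' required by ", callee);   // unprovable, not provably false
> }
>
> void main()
> {
>     assertFact("vec", "e >= 0");               // line X
>     checkCall("normalize", "vec", "e >= 0");   // line Y: fine, no warning
>
>     invalidate("vec");                         // line Z: vec[0] -= 1
>     checkCall("normalize", "vec", "e >= 0");   // line Y again: warning only
> }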
> 
> It's like a lint warning. Your code may not be wrong, but you might want
> to consider altering it in a way that makes it provably correct.
> 
> We would really like to be able to assert that certain properties always
> hold for a variable throughout its lifetime. That is a harder problem.
> 
> I see something like this as a basis on which more advanced analysis can 
> be built gradually.
> 
> Actually, re-using static assert above was probably misleading. It would
> be better to have something that explicitly means "talk to the
> compile-time theorem prover" (a Prolog interpreter would do).
> 
> I have been meaning to write something along these lines for years, but
> so far I haven't got around to it, so I might as well stop keeping it
> under my hat.
> 
> Regards,
> 
> Bruce.

I think it's terrific to try your hand at it. I for one know I am not
equipped for such a pursuit. I have no idea how to perform unification
over loops, particularly when they may have myriads of little
differences while still having the same semantics; how to do that
cross-procedurally in reasonable time; how to take control flow
sensitivity into account; and why exactly people who've worked hard at
it haven't gotten very far. But I am equipped to define a type that
enforces all that simply and clearly, today.
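
Just to give the flavor of what I mean, here is a sketch (names made up;
NonNegativeVec stands in for the udouble[] idea, and a real version would
need more care about aliasing and ranges). The property is checked once, at
construction, and the type carries the guarantee from then on:

import std.algorithm : reduce;

// Hypothetical "all elements >= 0" array: checked when built, preserved by
// every write that goes through the type.
struct NonNegativeVec
{
    private double[] data;

    this(double[] src)
    {
        foreach (e; src) assert(e >= 0);   // the one and only checking loop
        data = src.dup;                    // copy, so nobody mutates it behind our back
    }

    double opIndex(size_t i) const { return data[i]; }

    void opIndexAssign(double v, size_t i)
    {
        assert(v >= 0);                    // writes preserve the property
        data[i] = v;
    }

    size_t length() const { return data.length; }

    const(double)[] opSlice() const { return data; }   // read-only view
}

// No checking loop here: the parameter's type already says what we need.
void normalize(ref NonNegativeVec vec)
{
    auto sum = reduce!"a + b"(0.0, vec[]);
    assert(sum > 0);
    foreach (i; 0 .. vec.length)
        vec[i] = vec[i] / sum;             // nonnegative / positive stays nonnegative
}

void main()
{
    auto v = NonNegativeVec([1.0, 3.0, 4.0]);   // checked here, once
    normalize(v);                               // never rechecked
}

Where the analysis would have to prove the property, the type simply makes
it impossible to violate.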


Andrei



