Unofficial wish list status. (Jul 2008)

Fawzi Mohamed fmohamed at mac.com
Wed Jul 23 05:41:35 PDT 2008


On 2008-07-23 02:40:55 +0200, Sean Kelly <sean at invisibleduck.org> said:

> Walter Bright wrote:
>> Sean Kelly wrote:
>>> I personally feel that requiring that parameters be const is sufficient 
>>> for memoizing results.  Requiring invariance assumes a programming 
>>> model that I simply don't endorse.  But I'll grant that it's certainly 
>>> the safer approach.
>> 
>> What such an approach would rely on would be the *convention* that 
>> nobody would change the referenced const data from one invocation of 
>> the function to the next. This contradicts the whole premise, and is no 
>> better than C++.
> 
> Oops, you're right.  I was thinking simply of the data changing while 
> the function was executing.  I suppose requiring invariance of 
> parameters does make sense.
> 
> 
> Sean

I think that the const/invariant separation makes a lot of sense, and 
the whole framework is sound:

const: this is constant *now* and should not be changed by you (but it 
might be changed by others)

invariant: this won't be changed by anybody (it is always constant); 
you can cache it, do whatever you want with it, and expect it to 
remain the same in the future
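
As a minimal sketch of the difference, in the D 2.x syntax of the time 
(invariant was later renamed immutable; the variable names are mine):

import std.stdio;

void main()
{
    int x = 1;
    const(int)* cp = &x;   // const view: no writes allowed through cp...
    x = 2;                 // ...but the owner of x may still mutate it
    writefln("%s", *cp);   // prints 2: the const data changed under us

    invariant int y = 3;
    invariant(int)* ip = &y;  // invariant: nobody can ever change y,
                              // so *ip may be cached or memoized safely
    writefln("%s", *ip);      // always 3
}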

From another post of Walter's:
> 4. Invariant data can be placed in hardware protected readonly memory, 
> such as ROM.

I think that invariant should not necessarily allow that.

Let's call invariantW the invariant that can be placed in readonly memory.

I introduce a weaker form of invariant that I call invariantF.
For invariantF the bits that a function can see are constant (as with 
invariantW), but the bits might not actually be present in the 
hardware from the beginning.
This allows lazy evaluation and single-assignment dataflow variables.

Lazy evaluation means that part of the structure is calculated "on 
demand" by a pure function. At the logical level its result is known 
from the beginning and is constant, but the bits are not there yet. 
In case of a race condition the bits can be calculated twice, but the 
result would still be the same, so it is just an efficiency problem 
that does not change any result.
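
As a rough sketch of the mechanism (the names Lazy, force and square 
are mine, and the invariantF typing itself is elided):

// square is conceptually pure: same input, same output
int square(int n) { return n * n; }

struct Lazy
{
    int value;      // logically constant, physically not there yet
    bool computed;  // have the bits been produced?
}

// compute on demand; a racing thread may run square twice, but both
// runs store the same bits, so only efficiency is lost
int force(ref Lazy l, int n)
{
    if (!l.computed)
    {
        l.value = square(n);
        l.computed = true;
    }
    return l.value;
}

void main()
{
    Lazy l;
    assert(force(l, 7) == 49);  // first request computes the bits
    assert(force(l, 7) == 49);  // second one just reads them back
}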

With dataflow variables the result is calculated by someone else, and 
if you request it you wait until it is there. Here too the value 
should be unique; actually one might allow the result to be set more 
than once, but only if it is always set to the same value (or a 
compatible partial value).
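
A minimal single-assignment sketch, again with names of my own 
(DataflowInt, set, get); the blocking wait that a real implementation 
would need is reduced to an assert here:

struct DataflowInt
{
    int value;
    bool bound;  // has the cell been given its (unique) value?
}

void set(ref DataflowInt v, int x)
{
    if (v.bound)
        assert(v.value == x);  // rebinding allowed only to the same value
    else
    {
        v.value = x;
        v.bound = true;
    }
}

int get(ref DataflowInt v)
{
    assert(v.bound);  // a real implementation would wait here instead
    return v.value;
}

void main()
{
    DataflowInt v;
    set(v, 42);
    set(v, 42);            // ok: same value again
    assert(get(v) == 42);  // readers always see the one value
}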

These two things add a lot of power to functional programming while 
leaving all of its nice properties intact.
A function should not be able to tell whether the bits are already 
there; from its point of view they always are. From the efficiency 
point of view, though, it can make a huge difference: the difference 
between streaming a file and loading it all at once, for example.

Clearly invariantF is weaker than invariantW, and you lose some 
optimization possibilities, like putting everything in ROM, but the 
functional core gains a lot of power.

I think that the two concepts are close enough that only one of them 
should be adopted (I don't want an almost_invariant keyword), and I 
think that D should adopt the invariantF concept, because that way 
the functional core becomes a lot more useful. I want to use pure 
lazy data structures with a compiler that optimizes them.

Fawzi



