Notes on D future
Reiner Pope
some at address.com
Wed Aug 29 01:36:34 PDT 2007
bearophile wrote:
> I have just read the interesting document "WalterAndrei.pdf" recently linked here. I don't understand all the things it says, but here are a few notes:
>
> Pages 17-18: the support for pure functions looks like an intelligent idea, but I don't know their syntax, how they will look, etc.
>
Presumably just an annotation like "pure" on the function should be enough:
pure int add(int x, int y) { return x + y; }
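and presumably the restriction is that the body can depend only on its
arguments. A small sketch of what I'd guess gets accepted and rejected
(the exact rules aren't spelled out in the slides):

int counter;                 // some mutable global

pure int add2(int x, int y)
{
    // counter++;            // presumably an error inside a pure function,
    //                       // since it touches mutable global state
    return x + y;            // fine: depends only on the parameters
}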
>
> Pages 23-25, polysemous values: they will be useful for various things, not just integers. But I don't think they help avoid other kinds of integer overflow/underflow (some of which are avoided by the Delphi compiler), like adding two big integers, which produces wrap-around, etc. (if necessary, such extra checks on integer operations could be disabled by the -release compilation flag).
>
I don't understand polysemous values as they are explained. In
particular, if a polysemous value is used in a context where the sign
doesn't matter, it compiles without error. In that case, how does the
compiler choose whether it is uint or int? Doesn't the choice matter in
every context, since the two types overflow at different points?
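For example, with made-up values:

uint u = 3_000_000_000u;     // fits in a uint, but not in an int
int  i = 1_000_000_000;
auto x = u + i;              // 4_000_000_000: fine if x ends up a uint,
                             // but already wrapped around if it ends up an int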
And I would have thought that, if the context clarifies the sign, then
the type should just accept the sign determined by the context, ie:
void foo(int i) {...}

unittest
{
    uint u;
    int i;
    auto x = i + u;
    foo(x); // ok, so x is an int
}
It seems like this is a more powerful form of type inference, but then
what is this about polysemous function results? Quote, "result type or
error type." Are polysemous types actually algebraic data types (aka
discriminated unions)?
I was going to wait for the video and hope that explained it, but now
that we're talking...
>
> Page 33: I think some of those "optimizations based on constant folding" can be done from just a single function (that the compiler automatically transforms into a kind of "template", copying it as necessary), using some partial compilation on a copy of the function itself. So a function like:
> int foo(int x, int y) {...}
> If called like:
> a = foo(something(), y);
> b = foo(3, y);
> Can be automatically managed by the compiler as if it were a pair of:
> int foo(int x, int y) {...}
> int foo(int x)(int y) {...}
>
>
> Page 52: can't the compiler automatically unroll little loops based on constant intervals/loop vars? (Another compilation flag like -unroll can be added, if necessary.) I presume "static foreach" is a way for the programmer to tell the compiler that he/she really wants such unrolling done, so if such unrolling isn't possible, then the compiler must raise a compilation error instead of just silently leaving the loop as it is. At the moment I think tuples are looped over with an implicit static foreach. Is such an implicit "static" going to become explicit and necessary? So this raises a compilation error:
> void foo(Types...)(Types args) {
>     foreach (arg; args)
>         bar(arg);
>     ...
> And you must write:
> void foo(Types...)(Types args) {
>     static foreach (arg; args)
>         bar(arg);
>     ...
Both of these features do indeed seem to be optimizations which the
compiler could otherwise do, but I think that's not the point. Their
real use comes from the fact that the variables are compile-time
variables, which means that they can be used in defining types. Often it
is useful to parameterise a type by an integer; to do this, the integer
must be known at compile time. Writing this:
void foo(int x)
{
    MyType!(x) t;    // error: x is not a compile-time constant here
}

void main()
{
    foo(5);
}
won't work even though x is known to be 5 at compile time, because that
knowledge isn't transmitted through foo's interface.
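For contrast, here's roughly what you'd have to write today, with x as a
template value parameter:

void foo(int x)()            // x is now a template value parameter
{
    MyType!(x) t;            // fine: x is a compile-time constant here
}

void main()
{
    foo!(5)();               // the 5 is passed at compile time
}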
Static parameters solve this problem. Admittedly, it could be solved
by making x a template parameter, but there are reasons not to. Consider
the more useful example of a modified writefln. You might have two
prototypes:
writefln(...);                              // runtime varargs
writefln(static string s, ParamTypes!(s));  // compile-time varargs, with types
                                            // checked at compile-time by parsing
                                            // the formatting string
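A call might then resolve something like this (just my guess at the
behaviour; getFormatFromUser is a stand-in for any runtime source):

int x = 42;
char[] name = "world";

writefln("x = %d, hello %s", x, name);   // literal format is static, so %d and
                                         // %s can be checked against int and
                                         // char[] at compile time

char[] fmt = getFormatFromUser();        // format only known at runtime, so
writefln(fmt, x, name);                  // this falls back to the vararg overload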
This still *could* be done automagically by the compiler, but that seems
less likely. I agree that it would be nice if the compiler did this as an
optimization, though.
-- Reiner
PS. The other related thing I would really like to see is the ability to
manipulate types with standard (ie non-template) syntax. The problem at
the moment is that you can't use CTFE when doing operations on types.
What I would like is some type, for instance Type, which you could use
in normal parameter lists. It would always have to be a static
parameter, but it could be manipulated with arrays, etc. Then,
template Map(alias Fn, T...)
{
    static if (T.length == 0)
        alias Tuple!() Map;
    else
        alias Tuple!(Fn!(T[0]).val, Map!(Fn, T[1..$])) Map;
}
would become
Type[] map(alias Fn)(static Type[] types)
{
    Type[] res;
    foreach (t; types)
        res ~= Fn!(t);
    return res;
}
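Usage would then hopefully go from the recursive-template style to an
ordinary call, something like this (PointerTo as an example transformer,
the second line guessing wildly at the syntax, and someTypes standing in
for a static Type[] obtained somewhere):

alias Map!(PointerTo, int, char) Ptrs;            // today: template recursion
static Type[] ptrs = map!(PointerTo)(someTypes);  // then: a plain function call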
I know the details are missing, but I live in hope. :-)