D's treatment of values versus side-effect free nullary functions
Don
nospam at nospam.com
Fri Jul 23 13:54:30 PDT 2010
Walter Bright wrote:
> Andrei Alexandrescu wrote:
>> Walter Bright wrote:
>>> Don wrote:
>>>> While running the semantic on each function body, the compiler could
>>>> fairly easily check to see if the function is CTFEable. (The main
>>>> complication is that it has to guess about how many iterations are
>>>> performed in loops). Then, when a CTFEable function is called with
>>>> compile-time constants as arguments, it could run CTFE on it, even
>>>> if it is not mandatory.
>>>
>>> I think this is the halting problem, and is insoluble.
>>
>> On what basis?
>
> Trying to determine by static analysis if a function would evaluate
> within a "reasonable" amount of time, or even if it would finish at all.
>
> So let's suppose the compiler just attempts CTFE on those functions
> anyway, with a timeout if they take too long. Suddenly we've just thrown
> all the compiler performance out the window.
But it doesn't need to solve the halting problem in full. It can
still be of benefit if it identifies only those functions which can
be trivially shown to halt, and ignores any it is unsure about.
For example, a function which contains no loops and no recursive
calls (and which calls only such functions) always halts, so it is
trivially CTFEable.
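A minimal sketch of what such a function might look like (the name is
hypothetical; 'enum' is what forces CTFE in D today):

    // No loops, no recursion, no calls: evaluation trivially
    // terminates, so this is safe to evaluate eagerly at compile
    // time.
    int hypotSquared(int x, int y)
    {
        return x*x + y*y;
    }

    enum r = hypotSquared(3, 4); // forced CTFE today; the proposal
                                 // is that an ordinary call with
                                 // constant arguments could be
                                 // evaluated the same way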
Admittedly, that probably doesn't include many functions which wouldn't
be inlined anyway. But still, while checking for which functions are
inlinable, the compiler could check for 'trivially CTFEable' at the same
time.
If a CTFEable call is made to an inlinable function, it's probably
quicker to CTFE it than to inline and optimize it, assuming of course
that we have a fast implementation of CTFE.
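To illustrate the difference (a hypothetical sketch; sum3 is made up
for this example):

    int sum3(int a, int b, int c) { return a + b + c; }

    void main()
    {
        // Today: a runtime call, which relies on inlining plus
        // constant folding to be optimized away.
        int x = sum3(1, 2, 3);

        // Today: explicit CTFE, forced by the enum initializer.
        enum y = sum3(1, 2, 3);

        // The suggestion is that the compiler could treat the first
        // form like the second whenever all the arguments are
        // compile-time constants.
    }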
I completely agree that there will always be opportunities for CTFE
which will be missed unless you allow the compile time to become
arbitrarily long.