D's treatment of values versus side-effect free nullary functions
Don
nospam at nospam.com
Sun Jul 25 13:56:21 PDT 2010
Jim Balter wrote:
>
> "Rainer Deyke" <rainerd at eldwood.com> wrote in message
> news:i2g3oo$von$1 at digitalmars.com...
>> On 7/24/2010 15:34, Jim Balter wrote:
>>> The point about difficulty goes to why this is not a matter of the
>>> halting problem. Even if the halting problem were decidable, that would
>>> not help us in the slightest because we are trying to solve a
>>> *practical* problem. Even if you could prove, for every given function,
>>> whether it would halt on all inputs, that wouldn't tell you which ones
>>> to perform CTFE on -- we want CTFE to terminate before the programmer
>>> dies of old age. The halting problem is strictly a matter of theory;
>>> it's ironic to see someone who has designed a programming language based
>>> on *pragmatic* rather than theoretical considerations invoke it.
>>
>> That's exactly backwards. It's better to catch errors at compile time
>> than at run time. A program that fails to terminate and fails to
>> perform I/O is a faulty program. (A function that performs I/O is
>> obviously not a candidate for CTFE.) I'd rather catch the faulty
>> program by having the compiler lock up at compile time than by having
>> the compiled program lock up after deployment. Testing whether the
>> program terminates at compile time by attempting to execute the program
>> at compile time is a feature, not a bug.
>
> You have a good point, and that point would imply that whether a
> function would terminate, or how long it would take in general, isn't
> relevant to the decision of whether CTFE should be done. But there are
> (at least) two problems: 1) you can't be certain that the code will be
> run at run time at all -- in generic code you could easily have function
> invocations with constant values that would fail in various ways but the
> function is never run with those values because of prior tests. But CTFE
> wouldn't be nearly as useful if it were performed only for code that you
> can be certain will run. If you can't be certain, then you need a
> conservative approach, and you must not report errors that might never
> occur at run time; if you can be certain, then you could forge ahead at
> compile time no matter how long the computation would take. But: 2) You
> do not have the debug facilities at compile time that you have at run
> time. If the program stalls at run time, you can attach a debugger to it
> and find out what it's doing. But if CTFE is running at compile time and
> the compiler stalls, you don't know why ... unless the compiler has a
> mechanism such that you can interrupt it and it can report an execution
> trace of the CTFE code. That still is not enough -- you really need
> full debug capabilities to trace the code, all available at compile time.
> That's just too much.
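
To make the first problem concrete, here is a minimal D sketch (the names
are illustrative, not taken from the thread) of a call whose constant
argument is ruled out by a prior run-time test. A compiler that eagerly
folded every constant-argument call through CTFE would report a failure
for code that can never execute:

    // reciprocal() fails for a divisor of zero ...
    double reciprocal(int divisor)
    {
        assert(divisor != 0);       // would trip during eager CTFE of reciprocal(0)
        return 1.0 / divisor;
    }

    // ... but the prior test in safeReciprocal() keeps that value from
    // ever reaching it at run time.
    double safeReciprocal(int divisor)
    {
        if (divisor == 0)
            return 0.0;
        return reciprocal(divisor);
    }

    void main()
    {
        auto r = safeReciprocal(0); // constant argument, yet reciprocal(0) never runs
    }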
The D plugin for Eclipse included a compile-time debugger (!)
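
A minimal sketch of the failure mode such a compile-time debugger is meant
to help with (again with illustrative names): a function that never
terminates, forced through CTFE, leaves the compiler stalled with no
running program to attach an ordinary debugger to:

    int spin()
    {
        int i = 0;
        while (i < 10)
        {
            // i is never modified, so the loop never ends
        }
        return i;
    }

    enum stalled = spin();          // 'enum' forces CTFE; the build hangs here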