D's treatment of values versus side-effect free nullary functions
Rainer Deyke
rainerd at eldwood.com
Mon Jul 26 19:26:46 PDT 2010
On 7/26/2010 18:30, Jim Balter wrote:
> Consider some code that calls a pure function that uses a low-overhead
> exponential algorithm when the parameter is small, and otherwise calls a
> pure function that uses a high-overhead linear algorithm. The calling
> code happens not to be CTFEable and thus the test, even though it only
> depends on a constant, is not computed at compile time. The compiler
> sees two calls, one to each of the two functions, with the same
> parameter passed to each, but only one of the two will actually be
> called at run time. Trying to evaluate the low-overhead exponential
> algorithm with large parameters at compile time would be a lose without
> a timeout to terminate the attempt. It might be best if the compiler
> only attempts CTFE if the code explicitly requests it.
It seems to me that you're describing something like this:
if (x < some_limit) {
    return some_function(x);
} else {
    return some_other_function(x);
}
This does not pose a problem, assuming 'some_limit' is a compile-time
constant. If 'x' is known at compile time, the test can be performed at
compile time. If 'x' is not known at compile time, neither of the
function invocations can be evaluated at compile time.
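A minimal sketch of what I mean, with hypothetical placeholder bodies for
the two functions: forcing the result into an 'enum' makes the compiler
evaluate the branch and the chosen call via CTFE, while a run-time 'x'
leaves both calls alone.

int some_function(int x) pure { return x * 2; }        // placeholder body
int some_other_function(int x) pure { return x * 3; }  // placeholder body

int choose(int x) pure
{
    enum some_limit = 1000;   // compile-time constant, as assumed above
    if (x < some_limit)
        return some_function(x);
    else
        return some_other_function(x);
}

enum atCompileTime = choose(42);  // CTFE: branch and call both evaluated now
// auto atRunTime = choose(runtimeValue);  // run time: no CTFE anywhere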
The problem does exist in this code:
if (complex_predicate(x)) {
    return some_function(x);
} else {
    return some_other_function(x);
}
...but only if 'complex_predicate' is not a candidate for CTFE while the
other functions are. (This can happen if 'complex_predicate' performs
any kind of output, including debug logging, so the scenario is not
entirely unlikely.)
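To make that concrete, here is a sketch with a hypothetical predicate
body (the two branch functions are the pure placeholders from the sketch
above): the debug logging does I/O, so 'complex_predicate' itself cannot
be run during CTFE, even though both branches could be.

import std.stdio : writeln;

bool complex_predicate(int x)
{
    debug writeln("checking ", x);  // any I/O makes this ineligible for CTFE
    return x % 7 == 0;
}

int dispatch(int x)
{
    if (complex_predicate(x))       // cannot be evaluated at compile time...
        return some_function(x);    // ...even though both callees are pure
    else
        return some_other_function(x);
}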
Actually, I'm not entirely sure that CTFE is even necessary for producing
optimal code. The optimizations enabled by CTFE seem like a subset of
those enabled by aggressive inlining combined with other common
optimizations.
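A rough (hypothetical) illustration of that claim: the first declaration
below is guaranteed to be computed in the front end via CTFE, while a
backend with inlining and constant folding (e.g. dmd -O -inline) can
likely reduce the second to the same constant without any CTFE at all.

int cube(int n) pure nothrow { return n * n * n; }

enum viaCTFE = cube(10);                  // evaluated by the front end (CTFE)
int viaOptimizer() { return cube(10); }   // likely folded to 'return 1000;' after inlining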
--
Rainer Deyke - rainerd at eldwood.com