Integer semantics in D: what are the tradeoffs? Do we have numbers?
deadalnix
deadalnix at gmail.com
Sun Dec 16 14:52:43 PST 2012
Following the thread on integer semantics, I wanted to know if
any data are available on the tradeoff we are making. Let me
first explain what the tradeoffs are.
In D, integers are guaranteed to wrap around, i.e. uint.max + 1 == 0 .
In C, signed overflow is undefined behavior, so the compiler is
allowed to assume the operation does not overflow, which enables some
optimizations. See the two examples below:
if(a < a + 1) { ... }
In that case, a C compiler can decide that the condition is always
true, since it may assume a + 1 never overflows. This may seem like a
stupid piece of code, but in fact it is something the compiler can
exploit on a regular basis. Keep in mind that the pattern usually does
not appear in a form as obvious as this, but emerges after several
code transformations (function inlining, constant propagation, etc.).
Another example is (x + 1) * 2. The compiler may decide to rewrite it
as 2 * x + 2, as that can be done in one operation on many CPUs,
whereas (x + 1) * 2 is not always possible in one. Both are
equivalent, except when the integers overflow.
As we can see, ensuring that integers wrap properly is costly (if
someone has numbers, I'd be happy to know how costly it is). On the
other hand, we get predictable results from integer computations when
they overflow. I wonder how much of a gain that is. In my experience,
most pieces of code are doomed anyway when an integer overflow occurs.
Another solution is to use checked integers, but this is even more
costly. That has been discussed quite a lot in the previous thread, so
I want to concentrate on the first issue.
How much performance is sacrificed compared to looser integer
semantics (I frankly don't know), and how much do programs benefit
from wrapping (I suspect very little, but I may be wrong)?
More information about the Digitalmars-d mailing list