Detecting inadvertent use of integer division

Don nospam at nospam.com
Mon Dec 14 01:57:26 PST 2009


Consider this notorious piece of code:

assert(x>1);
double y = 1 / x;

This calculates y as the reciprocal of x, if x is a floating-point
number. But if x is an integer, an integer division is performed instead
of a floating-point one; since the assert guarantees x > 1, the quotient
truncates and y will be 0.

It's a very common newbie trap, but I find it still catches me 
occasionally, especially when dividing two variables or compile-time 
constants.
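
For instance (a made-up snippet with my own variable names, not from any
real code):

int hits = 3;
int total = 7;
double ratio = hits / total; // integer division: ratio is 0.0, not ~0.43
double frac = 3 / 4;         // same trap with compile-time constants: 0.0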

In the opPow thread there were a couple of mentions of inadvertent 
integer division, and of how Python is removing this error by making / 
always mean floating-point division and introducing a separate operator 
(//) for integer division.

We could largely eliminate this type of bug without doing anything so 
drastic. Most of the problem just comes from C's cavalier attitude to 
implicit casting. All we'd need to do is tighten the implicit conversion 
rules for int->float, in the same way that the int->uint rules have been 
tightened:

"If an integer expression has an inexact result (ie, involves an inexact 
integer divison), that expression cannot be implicitly cast to a 
floating-point type."

(This means that double y = int_val / 1; is OK, and so is
  double z = 90/3;. A stricter alternative rule would be:
"If an integer expression involves any integer division, that expression 
cannot be implicitly cast to a floating-point type").
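
To make the two rules concrete, here is how I'd expect a few cases to be 
treated (my own reading of the proposal, not anything a compiler enforces 
today):

double a = 90 / 3;      // exact (30): fine under either rule
double b = 90 / 7;      // inexact (truncates to 12): error under either rule
double y = 1 / x;       // x is a run-time int, so exactness can't be proven;
                        // presumably rejected under either rule
double z = int_val / 1; // division by 1 is always exact: fine under the
                        // first rule, an error under the stricter alternative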

In the very rare cases where the result of an integer division was 
actually intended to be stored in a float, an explicit cast would be 
required. So you'd write:
double y = cast(double)(1/x);
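
(And in the common case where floating-point division was the real intent, 
the fix is simply to make one operand floating-point, e.g. 
double y = 1.0 / x;)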

As with the implicit uint->int conversions that have recently been 
disallowed, I think this would prevent a lot of bugs without causing much 
pain.


