Detecting inadvertent use of integer division
Don
nospam at nospam.com
Tue Dec 15 07:36:48 PST 2009
Steven Schveighoffer wrote:
> On Tue, 15 Dec 2009 03:02:01 -0500, Don <nospam at nospam.com> wrote:
>
>> Phil Deets wrote:
>>> On Mon, 14 Dec 2009 04:57:26 -0500, Don <nospam at nospam.com> wrote:
>>>
>>>> In the very rare cases where the result of an integer division was
>>>> actually intended to be stored in a float, an explicit cast would be
>>>> required. So you'd write:
>>>> double y = cast(int)(1/x);
>>> To me,
>>> double y = cast(double)(1/x);
>>> makes more sense. Why cast to int?
>>
>> That'd compile, too. But it's pretty confusing to the reader, because
>> that code will only ever set y to -1.0, +1.0, +0.0, or -0.0, or else
>> create a divide-by-zero error. So I'd recommend a cast to int.
>
> I agree with Phil; in no situation that I can think of does
>
> T i, j;
>
> T k = i/j;
> U k = cast(T)(i/j);
>
> make sense. I'd expect to see cast(U) there.
>
> You can think of it as: i/j returns an undisclosed type that implicitly
> converts to T, but not to U, even if T implicitly converts to U.
>
> Wow, this is bizarre.
Yeah. (However, mixing integer division with floating point is bizarre
to start with. As I mentioned elsewhere in the thread, I don't think
I've ever actually seen it.)
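To make the earlier point concrete (a sketch; the value of x is just a
placeholder int):

int x = 7;
double y = cast(double)(1/x); // integer division happens first: 1/7 == 0
// For any int x, 1/x can only be 1 (x == 1), -1 (x == -1), or 0 (|x| > 1);
// x == 0 is a divide-by-zero error.

That tiny range of possible results is why a cast to double, rather than
to int, tends to obscure what the code actually computes.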
> I like the idea, but the recommendation to cast to int makes no sense to
> me. I'd also actually recommend this instead:
>
> auto y = cast(double)(1/x);
Fair enough; you won't see the recommendation to cast to int anywhere.
But it doesn't really matter much. The important thing is that, by
inserting *some* cast, you've drawn attention to the operation.
Hopefully, the fact that you got a compiler error will inspire you to
put a comment in the code as well <g>.
Personally, I'd always rewrite such a thing as:
int z = 1/x; // Note: integer division!!
double y = z;
Completely separating the integer and floating-point parts.
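Spelled out under the proposal, the alternatives would look like this (a
sketch; x is assumed to be a nonzero int):

int x = 4;
// double y = 1/x;             // would now be rejected at compile time
double y1 = cast(double)(1/x); // accepted: the cast flags the integer division
int z = 1/x;                   // Note: integer division!!
double y2 = z;                 // accepted: the conversion is a separate step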
> On the idea as a whole, I think it's very sound. Note that the only
> case where it gets ugly (i.e. requiring casts) is when both operands of
> the division are symbols, since it's trivial to turn an integer literal
> into a floating-point one.
Exactly. And I think that's the situation where it happens in practice.
Normally, integer literals can be used as floating-point literals. This
is the one case where an integer literal and a floating-point literal
have completely different meanings.
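For example (again a sketch, with x an int):

double a = 1;       // fine: the integer literal 1 converts to 1.0
double b = 1.0 / x; // floating-point division, almost always what was meant
double c = 1 / x;   // integer division -- under the proposal, this needs a cast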