Preventing implicit conversion
Adam D. Ruppe via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Thu Nov 5 05:23:32 PST 2015
On Thursday, 5 November 2015 at 10:07:30 UTC, Dominikus Dittes
Scherkl wrote:
> And I want small number literals to automatically choose the
> smallest fitting type.
It does; that's value range propagation at work. Inside a single
expression, if the compiler can prove the result fits in a smaller
type, no explicit cast is necessary.
ubyte a = 255; // allowed, despite 255 being an int literal
ubyte b = 253L + 2L; // allowed, though I used longs there
ubyte c = 255 + 1; // disallowed, 256 doesn't fit
However, the key there was "in a single expression". If you break
it into multiple lines with runtime values, the compiler assumes
the worst:
int i = 254;
int i2 = 1;
ubyte a2 = i + i2; // error: the compiler doesn't know what values i and i2 hold
But adding a constant operation can narrow the range back down:
ubyte a3 = (i + i2) & 0xff; // works: anything & 0xff always fits in a ubyte
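Putting the snippets above together into one runnable program (the
commented-out line is the one that fails to compile; variable names
follow the ones used above):

```d
void main()
{
    ubyte a = 255;              // ok: the literal provably fits in a ubyte
    int i = 254;
    int i2 = 1;
    // ubyte a2 = i + i2;       // error: runtime ints, worst case assumed
    ubyte a3 = (i + i2) & 0xff; // ok: the mask bounds the result to 0..255
    assert(a == 255);
    assert(a3 == 255);          // 254 + 1 == 255, and 255 & 0xff == 255
}
```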
> ubyte b = 1u;
> auto c = b + 1u;
>
> I expect the 1u to be of type ubyte - and also c.
This won't work because of the single-expression rule. In the second
line, the compiler doesn't know for sure what b is; it only knows it
is somewhere between 0 and 255. So it assumes the worst, that it is
255, and you add one, giving 256... which doesn't fit in a ubyte.
It requires an explicit cast, or a & 0xff, or something like that
to make the bit truncation explicit.
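A minimal sketch of both workarounds for the quoted example (the
helper names are mine, just for illustration):

```d
// Explicit cast: you accept the truncation and say so.
ubyte addOneCast(ubyte b) { return cast(ubyte)(b + 1); }

// Masking: the & 0xff bounds the int result to 0..255,
// so value range propagation allows the implicit conversion.
ubyte addOneMask(ubyte b) { return (b + 1) & 0xff; }

void main()
{
    assert(addOneCast(1) == 2);
    assert(addOneMask(255) == 0); // 256 & 0xff wraps to 0
}
```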
I agree this can be kinda obnoxious (and, I think, kinda pointless
if you're dealing with explicitly typed smaller things throughout),
but knowing what it is actually doing can help a little.