Why are there so many unnecessary casts?
Adam D. Ruppe
destructionator at gmail.com
Tue Jun 11 05:35:32 PDT 2013
On Tuesday, 11 June 2013 at 10:12:27 UTC, Temtaime wrote:
> ubyte k = 10;
> ubyte c = k + 1;
>
> This code fails to compile because of: Error: cannot implicitly
> convert expression (cast(int)k + 1) of type int to ubyte
The reason is that arithmetic operations promote the operands
to int; that's why the error message shows cast(int)k. The
compiler then sees an int being assigned to a ubyte and
considers it too big. It really isn't about overflow, it is
about truncation.
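
For illustration, a minimal sketch of the two usual fixes (the
variable names are just examples): an explicit cast, which
truncates silently, or std.conv.to, which checks the value at
run time:

    import std.conv : to;

    void main()
    {
        ubyte k = 10;

        // k + 1 is computed as int, so it must be narrowed
        // back down explicitly:
        ubyte c1 = cast(ubyte)(k + 1); // silently truncates on overflow
        ubyte c2 = (k + 1).to!ubyte;   // throws ConvOverflowException instead
    }
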
That's why uint + 1 is fine. The result there is still 32 bits,
so assigning it to a 32-bit variable is no problem, even if it
does overflow. But k + 1 is promoted to int first, so it is a
32-bit number, and now the compiler complains that you are
trying to shove it into an 8-bit variable. Unless it can prove
the result still fits in 8 bits, it complains, and it doesn't
look outside the immediate line of code to prove it. So it
assumes k could be 255, and 255 + 1 = 256, which doesn't fit in
8 bits.
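
That single-line analysis is D's value range propagation. Here
is a small sketch of how it plays out (the masking is just an
illustrative way to bound the range):

    void main()
    {
        ubyte k = 10;

        // The mask bounds the left operand to 0..127, so the
        // sum is at most 128 and provably fits in a ubyte:
        // this compiles without a cast.
        ubyte a = (k & 0x7F) + 1;

        // Here k could be 255, so k + 1 could be 256: error
        // without a cast.
        //ubyte b = k + 1;

        // A uint assignment needs no narrowing, so it is fine
        // even if the arithmetic wraps around:
        uint u = uint.max;
        uint v = u + 1; // wraps to 0, compiles without complaint
    }
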
The promotion to int is something D inherited from C and probably
isn't going anywhere.