Why are there so many unnecessary casts?

captaindet 2krnk at gmx.net
Tue Jun 11 17:48:08 PDT 2013


On 2013-06-11 07:35, Adam D. Ruppe wrote:
> On Tuesday, 11 June 2013 at 10:12:27 UTC, Temtaime wrote:
>> ubyte k = 10;
>> ubyte c = k + 1;
>>
>> This code fails to compile because of: Error: cannot implicitly
>> convert expression (cast(int)k + 1) of type int to ubyte
>
> The reason is that arithmetic operations promote the operands to int,
> which is why the error says cast(int)k. Then it thinks int is too big
> for ubyte. It really isn't about overflow, it is about truncation.
>
> That's why uint + 1 is fine. The result there is still 32 bits so
> assigning it to a 32 bit number is no problem, even if it does
> overflow. But k + 1 is promoted to int first, so it is a 32 bit
> number and now the compiler complains that you are trying to shove it
> into an 8 bit variable. Unless it can prove the result still fits in
> 8 bits, it complains, and it doesn't look outside the immediate line
> of code to try to prove it. So it thinks k can be 255, and 255 + 1 =
> 256, which doesn't fit in 8 bits.
>
> The promotion to int is something D inherited from C and probably
> isn't going anywhere.

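To make the promotion behaviour concrete, here is a minimal sketch of what current dmd accepts and rejects (the cast and the masked expression are illustrative additions of mine, not from the exchange above):

void main()
{
    ubyte k = 10;
    //ubyte a = k + 1;             // rejected: k is promoted to int, and the compiler
                                   // cannot prove the 32-bit result fits in 8 bits
    ubyte b = cast(ubyte)(k + 1);  // accepted: explicit truncation back to ubyte
    ubyte c = (k & 0x7F) + 1;      // accepted: value range propagation within this one
                                   // expression proves the result is at most 128
    uint u = uint.max;
    uint v = u + 1;                // accepted: the result is still 32 bits, it just wraps
}
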
I think part of the problem is that '1' is an int, so the whole calculation is promoted to int.
If we had byte and ubyte integer literals (suffixes b/B and ub/UB?), then whenever all the operands on the RHS are (unsigned) bytes, the compiler could infer that we are serious about sticking to bytes...

ubyte k = 10;       // or optionally 10ub
ubyte c = k + 1ub;  // hypothetical ubyte literal suffix
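
Such ubyte literal suffixes don't exist in current D, so for now the intent has to be spelled out on the assignment instead. A minimal sketch of the two usual spellings today (using std.conv.to for the checked variant is just my suggestion, not part of the proposal above):

import std.conv : to;

void main()
{
    ubyte k = 10;
    ubyte c1 = cast(ubyte)(k + 1); // unchecked: silently truncates if the value is too big
    ubyte c2 = (k + 1).to!ubyte;   // checked: throws ConvOverflowException if it doesn't fit
}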

