Integer promotion... what am I missing? (It's Monday...)

Don Clugston dac at nospam.com.au
Wed Jun 28 04:12:14 PDT 2006


Kirk McDonald wrote:
> Max Samuha wrote:
>> On Tue, 27 Jun 2006 12:33:32 -0700, Kirk McDonald
>> <kirklin.mcdonald at gmail.com> wrote:
>>
>>
>>> Alexander Panek wrote:
>>>
>>>> If you take a look at how comparison works, you'll know why this one 
>>>> fails.
>>>>
>>>> Let's take a uint a = 16; as in your example:
>>>> 00000000 00000000 00000000 00010000
>>>>
>>>> And now a signed integer with the value -1:
>>>> 10000000 00000000 00000000 00000001
>>>>
>>>
>>> Your point still stands, but -1 is represented as:
>>> 11111111 11111111 11111111 11111111
>>>
>>> http://en.wikipedia.org/wiki/Two%27s_complement
>>>
>>>
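
To make the corrected bit pattern concrete, here's a minimal check in D 
(the variable name is mine):

    void main()
    {
        int minusOne = -1;
        // Two's complement: all 32 bits are set, so reinterpreting
        // the same bits as unsigned gives uint.max.
        assert(cast(uint)minusOne == 0xFFFF_FFFF);
        assert(minusOne == ~0);   // ~0 likewise has every bit set
    }
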
>>>> You might guess which number is bigger when the comparison is done 
>>>> on the raw bits (and after all, that's what the processor does) :)
>>>>
>>>> Regards,
>>>> Alex
>>>>
>>>> Paolo Invernizzi wrote:
>>>>
>>>>
>>>>> Hi all,
>>>>>
>>>>> What am I missing?
>>>>>
>>>>>    uint a = 16;
>>>>>    int b = -1;
>>>>>    assert( b < a ); // this fails! I was expecting that -1 < 16
>>>>>
>>>>> Thanks
>>>>>
>>>>> ---
>>>>> Paolo
>>
>>
>> Maybe it's not a bug, but it is very confusing, no matter how integer
>> operations work internally. The compiler should at least give a warning
>> about incompatible types, try to cast the uint to int implicitly, or
>> require an explicit cast.
> 
> It's worth noting that this behavior (of a being less than b) follows 
> the implicit conversion rules exactly:
> 
> http://www.digitalmars.com/d/type.html
> 
> [snipped non-applicable checks...]
> 5. Else the integer promotions are done on each operand, followed by:
> 
>    1. If both are the same type, no more conversions are done.
>    2. If both are signed or both are unsigned, the smaller type is 
> converted to the larger.
>    3. If the signed type is larger than the unsigned type, the unsigned 
> type is converted to the signed type.
>    4. The signed type is converted to the unsigned type.
> 
> So the int is implicitly converted to the uint (rule 4 above), and it 
> simply compares 2**32-1 to 16.
> 
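
To make that concrete, here's the original example with the implicit 
conversion written out by hand (a sketch of what the compiler does, not 
its actual output):

    void main()
    {
        uint a = 16;
        int  b = -1;
        // Rule 4 applies: b is converted to uint before the compare,
        // and cast(uint)(-1) is uint.max, i.e. 4294967295.
        assert(cast(uint)b == uint.max);
        assert(!(b < a));   // really 4294967295 < 16: false
    }
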
> So I wouldn't call this a bug, just a potential oddity. Maybe it should 
> detect and throw an overflow if a negative signed integer is converted 
> to an unsigned type? 

That would introduce a massive performance hit. The existing conversion 
from signed to unsigned happens entirely at compile time; at run time 
the bits are simply reinterpreted, at no cost.
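
To see where the hit would come from: a run-time check would have to 
look something like this (a hypothetical helper, not anything in 
Phobos), and every signed-to-unsigned conversion site would pay for the 
extra branch:

    uint toUintChecked(int x)
    {
        if (x < 0)
            throw new Exception("negative value converted to uint");
        return cast(uint)x;
    }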

> Or not: I'd consider this an edge case. It's 
> probably considered bad practice to promiscuously mix signed and 
> unsigned types.

It's a hard one. It would be really painful if equality comparisons 
between signed & unsigned types were an error; they're almost always OK.
Comparison of a signed/unsigned variable with an unsigned/signed constant 
is always an error; it would be nice if the compiler detected it.
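
A couple of cases of the kind I mean (illustrative values; the results 
follow from the conversion rules quoted above):

    void main()
    {
        uint u = 16;
        int  s = 16;

        // Mixed-sign equality: almost always does what you expect.
        assert(u == s);     // both sides really are 16

        // Mixed-sign ordering against a constant: always a bug.
        // -1 is converted to uint.max, so this assert passes:
        assert(u < -1);     // "16 < -1" reads as nonsense, but holds here
    }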


