dmd 1.046 and 2.031 releases
Derek Parnell
derek at psych.ward
Sun Jul 5 23:53:30 PDT 2009
On Sun, 05 Jul 2009 23:35:24 -0700, Walter Bright wrote:
> Derek Parnell wrote:
>> One of the very much appreciated updates here is "Implicit integral
>> conversions that could result in loss of significant bits are no longer
>> allowed.". An excellent enhancement, thank you.
>
> Thank Andrei for that, he was the prime mover behind it.
Yes, our English language is poor. I should have said "thank yous" ;-)
>> But I am confused as this below compiles without complaint...
>> -----------
>> import std.stdio;
>> void main()
>> {
>> byte iii;
>> ubyte uuu = 250;
>> iii = uuu;
>> writefln("%s %s", iii, uuu);
>> }
>> -----------
>>
>> Output is ...
>> -6 250
>>
>> But I expected the compiler to complain that an unsigned value cannot be
>> implicitly converted to a signed value, as that results in loss of
>> *significant* bits.
>
> We tried for a long time to come up with a sensible way to deal with the
> signed/unsigned dichotomy. We finally gave that up as unworkable.
> Instead, we opted for a rule based on significant bits, *not* on how those
> bits are interpreted. -6 and 250 are the same bits in byte and ubyte; the
> difference is interpretation.
I am disappointed. I hope you haven't stopped working on a solution to
this, though, as allowing D to silently permit bugs it could prevent is not
something we are hoping for.
I can see that the argument so far hinges on the meaning of "significant".
I was hoping that a 'sign' bit would have been significant.
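To illustrate the rule as I understand it (a rough sketch; the exact error
messages from dmd will differ):
----------
void main()
{
    int n = 1000;
    byte b = n;    // rejected: an int has more significant bits than a byte
    ubyte u = 250;
    byte s = u;    // accepted: same 8 bits, only the sign interpretation differs
}
----------
So a conversion that throws away value bits is caught, but one that flips
the meaning of the top bit sails through.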
As for "the same bits in X and Y, the difference is interpretation", that
argument is applied selectively. For example ...
----------
short iii;
struct U { align(1) byte a; byte b; }
U uuu;
iii = uuu;   // rejected: the compiler will not bit-map a struct onto a short
----------
The bits in 'uuu' can be accommodated in 'iii', so why not allow implicit
conversion? Yes, that is a rhetorical question. Because we know that the
struct means something different to the scalar 'short', conversion via
bit-mapping is not going to be valid in most cases. However, we also know
that a signed value is not the same as an unsigned value even though they
have the same number of bits; that is, the compiler already knows how to
interpret those bits.
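What I would have expected is for the compiler to make the programmer spell
the reinterpretation out. A sketch of the usage I'd prefer (this is just
what an explicit cast already looks like, not new syntax):
----------
ubyte uuu = 250;
byte iii = cast(byte) uuu;  // explicit: the reinterpretation is visible at the call site
----------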
I'm struggling to see why the compiler cannot just disallow any implicit
signed<->unsigned conversion. Is it a matter of backward compatibility
again?
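And for code that must cope with values that genuinely may not fit, a
checked conversion is already in the library; a minimal sketch using
std.conv (assuming the Phobos shipped with 2.031 performs the range check):
----------
import std.conv;

void main()
{
    ubyte uuu = 250;
    byte iii = to!byte(uuu);  // throws at runtime: 250 is outside byte's -128..127 range
}
----------
So the pieces for safe narrowing are there; the question is only whether
the implicit conversion should be allowed to bypass them.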
--
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell