[Issue 1977] Relax warnings for implicit narrowing conversions caused by promotions

d-bugmail at puremagic.com
Tue Nov 25 07:02:34 PST 2008


http://d.puremagic.com/issues/show_bug.cgi?id=1977

------- Comment #18 from schveiguy at yahoo.com  2008-11-25 09:02 -------
(In reply to comment #17)
> > And most believe that:
> > 
> > byte b2 = b + b;
> > 
> > should produce b2 == -128 without error, and should be equivalent semantically
> > to:
> > 
> > byte b2 = b;
> > b2 += b;
> > 
> > We don't want adding 2 bytes together to result in a byte result in all cases,
> > only in cases where the actual assignment or usage is to a byte.
> 
> Well the "most" part doesn't quite pan out, and to me it looks like the
> argument fails here. For one thing, we need to eliminate people who accept Java
> and C#. They would believe that what their language does is the better thing to
> do.

Just because people use a language doesn't mean they agree with every
decision.  In searching for this issue on C# blogs and message boards, the
overwhelming majority prefers no error to the over-safe current
implementation.  The defenders of the current rules invariably cite the case
of adding two bytes together and assigning the result to an integer, their
argument being that if adding two bytes yields a byte, then the integer
result is a truncated byte.  If we eliminate that case from contention, as my
solution does, I think you'd be hard-pressed to find anyone who thinks the
loss-of-data errors are still needed in cases such as the one that spawned
this discussion.
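
To make the two positions concrete, here is a small D sketch of the cases
being argued over (the variable names and values are mine, and the exact
diagnostic wording varies by compiler version):

    void main()
    {
        byte b = 100;

        // The case that spawned this discussion: rejected, because
        // b + b is promoted to int before the assignment.
        //byte b2 = b + b;   // error: cannot implicitly convert int to byte

        // Yet the semantically equivalent two-step form compiles:
        byte b2 = b;
        b2 += b;             // wraps to -56 with no complaint

        // The defenders' case: here the promotion genuinely matters.
        // If byte + byte yielded a byte, sum would be a truncated -56
        // instead of 200.
        int sum = b + b;     // 200, thanks to the promotion to int
    }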

> Also, C and C++ are getting that right by paying a very large cost - of
> allowing all narrowing integral conversions. I believe there is a reasonable
> level of agreement that automatic lossy conversions are not to be encouraged.
> This puts C and C++ behind Java and C# in terms of "getting it right".

I agree that general narrowing conversions should fail.  We disagree only on
the case where arithmetic has artificially promoted the result.
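
The distinction, in code (a sketch; the names i, b, x, and y are mine, for
illustration only):

    void main()
    {
        int i = 1000;
        byte b;

        // A genuine narrowing conversion: real information can be
        // lost, so rejecting it is the right call.
        //b = i;                // error: cannot implicitly convert

        byte x = 1, y = 2;
        // An artificial narrowing: both operands started as bytes, and
        // the int-ness exists only because of integral promotion.
        // This is the case where no error should be required.
        //b = x + y;            // also an error under the current rules
        b = cast(byte)(x + y);  // the cast the rules currently force
    }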

> 
> > What if we defined several 'internal' types that were only used by the
> > compiler?
> > 
> > pbyte -> byte promoted to an int (represented as an int internally)
> > pubyte -> ubyte promoted to an int
> > pshort -> short promoted to an int
> > pushort -> ushort promoted to an int
> > etc...
> 
> IMHO not enough rationale has been brought forth on why this *should* be
> implemented. It would make D implement an arcane set of rules for an odd, if
> any, benefit. 

Probably it isn't that critical to the success of D that this be
implemented.  If I had to choose something to look at, this probably wouldn't
be it.  This is just one of those little things that is more unnecessary and
annoying than it is blocking.  It shows up seldom enough that it probably
isn't worth the trouble to fix.  But I have put my solution forth, and as far
as I can tell you didn't find anything wrong with it; that's about all I can
do.
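
For the record, a sketch of what the proposed internal promoted types would
change at the source level (the decay-to-int rule for mixed operands is my
reading of the proposal, not something spelled out above):

    void main()
    {
        byte a = 3, b = 4;

        // Under the proposal, a + b would internally be a "pbyte"
        // (a byte promoted to int), assignable back to byte:
        //byte c = a + b;    // OK under the proposal; an error today

        // Assigning it to an int keeps the full promoted value, so
        // the truncated-byte case never arises:
        int d = a + b;       // 7, computed at int width either way

        // Mixing in a real int would presumably decay the pbyte to a
        // plain int, restoring today's narrowing error:
        int e = 1;
        //byte f = a + e;    // still an error either way
    }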

> A better problem to spend energy on is the signed <-> unsigned morass. We've
> discussed that many times and could not come up with a reasonable solution. For
> now, D has borrowed the C rule "if any operand is unsigned then the result is
> unsigned" leading to the occasional puzzling results known from C and C++.
> Eliminating those fringe cases without losing compatibility with C and C++ is a
> tough challenge.

Indeed.  Without promoting to a larger type, I think you are forced to take
this course of action.  When adding an int to a uint, who wants it to wrap
around to a negative value?  I can't think of a better solution.
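
A small example of the puzzling results in question, following the C rule
described above:

    import std.stdio;

    void main()
    {
        int  a = -2;
        uint b = 1;

        // int + uint yields uint, so the mathematically negative
        // result wraps around to a huge positive one.
        auto c = a + b;    // c has type uint
        writeln(c);        // prints 4294967295, not -1

        // The classic comparison pitfall from C and C++: a is
        // converted to uint before comparing.
        writeln(a < b);    // prints false
    }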

