disabling unary "-" for unsigned types

Steven Schveighoffer schveiguy at yahoo.com
Mon Feb 15 15:33:11 PST 2010


On Mon, 15 Feb 2010 17:21:09 -0500, Walter Bright
<newshound1 at digitalmars.com> wrote:

> Steven Schveighoffer wrote:
>> are there any good cases besides this that Walter has?  And even if  
>> there are, we are not talking about silently mis-interpreting it.   
>> There is precedent for making valid C code an error because it is error  
>> prone.
>
>
> Here's where I'm coming from with this. The problem is that CPU integers  
> are 2's complement and a fixed number of bits. We'd like to pretend they  
> work just like whole numbers we learned about in 2nd grade arithmetic.  
> But they don't, and we can't fix it so they do. I think it's ultimately  
> fruitless to try and make them behave other than what they are: 2's  
> complement fixed arrays of bits.
>
> So, we wind up with oddities like overflow, wrap-around,  
> -int.min==int.min. Heck, we *rely* on these oddities (subtraction  
> depends on wrap-around). Sometimes, we pretend these bit values are  
> signed, sometimes unsigned, and we mix together those notions in the  
> same expression.

The counter-point is that a programming language is not fed to the CPU;
it is fed to a compiler.  The compiler must make the most of what it sees
in the source code, but it can also help the user express himself to the
CPU.  The problem is that when a statement is ambiguous, the compiler can
either pick an interpretation or throw an error.  There's nothing wrong
with throwing an error when the statement is ambiguous or nonsensical.

The alternative (which is what we have today) is that the compiler picks
an interpretation silently, and most of the time that meaning is not what
the user wants.

A more graphic example is something like this:

string x = 1;

What did the user mean?  Did he mean "make a string out of 1 and assign
it to x," or did he mistype the type of x?  Throwing an error is perfectly
acceptable here; I don't see why the same isn't true for:

uint x = -1;
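
For reference, here is roughly what happens today (a small sketch,
assuming current dmd behavior; the printed value is uint.max):

import std.stdio;

void main()
{
    uint x = -1;   // accepted silently today, no error or warning
    writeln(x);    // prints 4294967295, almost certainly not what was meant
}

The compiler quietly reinterprets the -1 rather than flagging the
ambiguity.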

> There's no way to not mix up signed and unsigned arithmetic.
>
> Trying to build walls between signed and unsigned integer types is an  
> exercise in utter futility. They are both 2-s complement bits, and it's  
> best to treat them that way rather than pretend they aren't.
>
> As for -x in particular, - is not negation. It's complement and  
> increment, and produces exactly the same bit result for signed and  
> unsigned types. If it is disallowed for unsigned integers, then the user  
> is faced with either:
>
>     (~x + 1)
>
> which not only looks weird in an arithmetic expression, but then a  
> special case for it has to be wired into the optimizer to turn it back  
> into a NEG instruction.

I should clarify: using - on an unsigned value should work; it just
should not be assignable to an unsigned type.  I guess I disagree with
the original statement of this post (that it should be disabled
altogether), but I think the compiler should disallow something that is
an error 99% of the time.

i.e.

uint a = -1; // error
uint b = 5;
uint c = -b; // error
int d = -b; // ok
auto e = -b; // e is type int

In the case of literals, I think allowing - on a literal should require
that it be assigned to a signed type or involve a cast.
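
In other words (hypothetical code, assuming the proposal above were
adopted; none of this is how the compiler behaves today):

int  a = -1;            // ok, negated literal assigned to a signed type
uint b = -1;            // error under the proposal
uint c = cast(uint)-1;  // ok, the cast documents the intent (uint.max)
uint d = uint.max;      // or just say what you mean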

-Steve


