Bug in ^^

Brett Brett at gmail.com
Tue Sep 17 17:34:18 UTC 2019

On Tuesday, 17 September 2019 at 16:49:46 UTC, Vladimir Panteleev wrote:
> On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
>> 10^^16 = 1874919424	???
>> 10L^^16 is valid, but
>> enum x = 10^^16 gives wrong value.
>> I didn't catch this ;/
> The same can be observed with multiplication:
> // This compiles, but the result is "non-sensical" due to 
> overflow.
> enum n = 1_000_000 * 1_000_000;
> The same can happen with C:
> static const int n = 1000000 * 1000000;
> However, C compilers warn about this:
> gcc:
> test.c:1:30: warning: integer overflow in expression of type 
> ‘int’ results in ‘-727379968’ [-Woverflow]
>     1 | static const int n = 1000000 * 1000000;
>       |                              ^
> clang:
> test.c:1:30: warning: overflow in expression; result is 
> -727379968 with type 'int' [-Winteger-overflow]
> static const int n = 1000000 * 1000000;
>                              ^
> 1 warning generated.
> I think D should warn about any overflows which happen at 
> compile-time too.

I have no problem with warnings; at least the overflow would then 
be detected rather than falling through silently, which can make 
things worse.

What's more concerning to me is how many people defend the 
compiler's behavior.


enum x = 100000000000000000;
enum y = 10^^17;

That these should produce two different results is moronic to me. 
I realize that 10^^17 is a computation, but at compile time the 
compiler should use the maximum precision available to compute 
values, since it actually can do so without issue (up to a point).

If enums actually are supposed to be ints, then it should give an 
error about overflow. If enums can scale to whatever type the 
compiler sees fit, then it should use L here, and when the values 
are used in the program it should error then, because they will be 
too large when stuck into ints.

Regardless of the behavior, it shouldn't produce silent, 
undetectable errors, which is what at least 4 people in here have 
advocated right off the bat, rather than a sane solution that 
prevents those errors. That is very concerning... why would anyone 
think allowing undetectable errors is reasonable behavior? I 
actually don't care how it works, as long as I know how it works. 
If it forces me to add an L, so be it, not a big deal. But if it 
causes crashes in my application and I have to spend hours trying 
to figure out why, because I made a logical assumption and the 
compiler made a different logical assumption and both are equally 
viable, then that is a problem, and it should be understood as a 
problem: not my problem, but the compiler's problem. Compilers are 
supposed to make our lives easier, not harder.
