Bug in ^^

Brett Brett at gmail.com
Tue Sep 17 16:50:18 UTC 2019


On Tuesday, 17 September 2019 at 14:21:33 UTC, John Colvin wrote:
> On Tuesday, 17 September 2019 at 13:48:02 UTC, Brett wrote:
>> On Tuesday, 17 September 2019 at 02:38:03 UTC, jmh530 wrote:
>>> On Tuesday, 17 September 2019 at 01:53:12 UTC, Brett wrote:
>>>> 10^^16 = 1874919424 ???
>>>>
>>>> 10L^^16 is valid, but
>>>>
>>>> enum x = 10^^16 gives the wrong value.
>>>>
>>>> I didn't catch this ;/
>>>
>>> 10 and 16 are ints. The largest int is 2147483647, which is 
>>> several orders of magnitude below 1e16. So you can think of 
>>> it as wrapping around multiple times and that is the 
>>> remainder: 1E16 - (2147483647 + 1) * 4656612 = 1874919424
>>>
>>> Probably more appropriate for the Learn forum.
>>
>>
>> Um, duh, but the problem is: why are they ints?
>>
>> It is a compile-time constant; the size shouldn't matter, 
>> since there are (in theory) no limitations on type size at 
>> compile time.
>>
>> For it to wrap around silently is error-prone and can 
>> introduce bugs into programs.
>>
>> The compiler should always use the largest type possible and, 
>> if appropriate, cast down; an enum is not appropriate to cast 
>> down to int.
>>
>> The issue is not how 32-bit math works BUT that it uses 
>> 32-bit math by default (and my app is 64-bit).
>>
>> Even if I use ulong as the type, it still computes the value 
>> in 32-bit. It should not do that; that is the point. It's 
>> wrong and bad behavior.
>>
>> Otherwise, what is the difference between it first calculating 
>> in long and then casting down and wrapping silently? It's the 
>> same problem, yet if I do that explicitly in a program it 
>> complains about precision; here it does not.
>>
>> Again, just so it is clear: the problem is not 32-bit 
>> arithmetic itself, but that 32-bit arithmetic is used instead 
>> of 64-bit. I could potentially live with it in a 32-bit 
>> program, but not in a 64-bit one, and even then it would be 
>> difficult because it is a constant... it's shorthand for 
>> writing out the long version, so it shouldn't silently wrap. 
>> If I write out the long version it craps out, so why not the 
>> computation itself?
>>
>>
>> Of course I imagine you still don't get it or believe me, so 
>> I can prove it:
>>
>>
>> enum x = 100000000000000000;
>> enum y = 10^^17;
>>
>> void main()
>> {
>>    ulong a = x;
>>    ulong b = y;
>>
>> }
>>
>> What do you think a and b are? Do you think they are the same 
>> or different?
>>
>> Do you think they *should* be the same or different?
>
> Integer literals without any suffixes (e.g. L) are typed int 
> or long based on their size. Any arithmetic done after that is 
> done according to the same rules as at runtime.
>
> Roughly speaking:
>
> The process is not:
>     we have an enum, let's work out any and all calculations 
> leading to it with arbitrary-size integers and then infer the 
> type of the enum as the smallest that fits it.
>
> The process is:
>     we have an enum, let's calculate its value using the same 
> logic as at runtime, and then the type of the enum is the type 
> of the answer.

It doesn't matter; I've already proved that the same 
mathematical expression gives two different results... your 
claim that it is an int is unfounded... did you look at the code 
I gave?
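
For what it's worth, the compiler itself will report the 
discrepancy at compile time (a minimal check, assuming dmd; 
pragma(msg, ...) prints types and values during compilation):

pragma(msg, typeof(10^^17));             // int: both operands are int
pragma(msg, typeof(100000000000000000)); // long: the literal doesn't fit in int
pragma(msg, 10^^17);                     // 1569325056, i.e. 10^17 wrapped modulo 2^32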

You can make claims about whatever you want, but facts are facts.

>> enum x = 100000000000000000;
>> enum y = 10^^17;

For those we should have x == y; no ifs, ands, or buts to 
justify the difference.
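
Run it and the difference is plain (a minimal sketch of the same 
example, assuming dmd; 1569325056 is 10^17 modulo 2^32):

import std.stdio : writeln;

enum x = 100000000000000000; // typed long: the literal does not fit in int
enum y = 10^^17;             // typed int: 10 and 17 are ints, so ^^ wraps

void main()
{
    ulong a = x;
    ulong b = y;
    writeln(a); // prints 100000000000000000
    writeln(b); // prints 1569325056
}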

No matter how you want to justify the compiler's behavior, it is 
wrong. It is OK to accept that; it actually makes the world a 
better place to accept when something is wrong, since that is 
the only way things get fixed.
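
For anyone hitting this in the meantime, the workaround 
(assuming the current literal-typing rules stay as they are) is 
the one already hinted at above with 10L^^16: force 64-bit 
evaluation with an L suffix.

enum long y = 10L ^^ 17;                // evaluated in 64-bit long
static assert(y == 100000000000000000); // passes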

