Can't add ubytes together to make a ubyte... bug or feature?
bauss
jj_1337 at live.dk
Sat Mar 17 20:08:20 UTC 2018
On Saturday, 17 March 2018 at 18:56:55 UTC, Dominikus Dittes
Scherkl wrote:
> On Saturday, 17 March 2018 at 18:36:35 UTC, Jonathan wrote:
>> On Tuesday, 19 January 2016 at 23:36:14 UTC, Adam D. Ruppe
>> wrote:
>>> On Tuesday, 19 January 2016 at 22:12:06 UTC, Soviet Friend
>>> wrote:
>>>> I don't care if my computer needs to do math on a 4-byte
>>>> basis, I'm not writing assembly.
>>>
>>> x86 actually doesn't need to do math that way; if you were
>>> writing assembly, it would just work. This is just an
>>> annoying rule brought over by C.
>>>
>>>> Can I prevent the initial implicit casts?
>>>
>>> Nope, though you can help tell the compiler that you want it
>>> to fit there by doing stuff like
>>>
>>> ubyte a = 200;
>>> ubyte b = 100;
>>> ubyte c = (a+b)&0xff;
>>>
>>> or something like that, so the expression is specifically
>>> proven to fit in the byte with compile-time facts.
>>
>>
>> `(a+b)&0xff` What is this syntax?! Could you give a link to
>> this in the D documentation? I am not even sure how to look
>> it up...
> & is the normal binary and operation, the same as in C, C++,
> Java, ...
> 0xFF is a hexadecimal constant (255), which the compiler knows
> fits in a ubyte.
> So what do you not understand about this syntax?
I guess he doesn't understand bitwise operations.
Also, don't you mean "bitwise and" rather than "binary and"?
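For the record, here's a minimal, self-contained sketch of the whole
thing (the variable names and values are just for illustration):
ubyte operands are promoted to int before the addition, so the sum
has type int and won't implicitly narrow back to ubyte unless the
compiler can prove it fits, which the & 0xff mask (or an explicit
cast) does via value range propagation:

import std.stdio;

void main()
{
    ubyte a = 200;
    ubyte b = 100;

    // Both operands are promoted to int, so the sum is an int:
    // ubyte c = a + b; // Error: cannot implicitly convert
    //                  // expression of type int to ubyte

    // Masking with 0xff proves the result is in 0..255,
    // so the implicit conversion to ubyte is allowed:
    ubyte c = (a + b) & 0xff;

    // An explicit cast truncates the same way:
    ubyte d = cast(ubyte)(a + b);

    writeln(c); // 44 (300 wraps modulo 256)
    writeln(d); // 44
}

Both print 44, since 300 wraps around modulo 256.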