Regarding the proposed Binary Literals Deprecation
0xEAB
desisma at heidel.beer
Sun Sep 11 01:12:37 UTC 2022
On Saturday, 10 September 2022 at 18:32:26 UTC, Walter Bright
wrote:
> I understand. I suppose it's like learning to touch-type. Takes
> some effort at first, but a lifetime of payoff. There's no way
> to avoid working with binary data without getting comfortable
> with hex.
Might not be generally applicable, but “binary data” and “data
represented by binary digits” aren’t practically the same thing
to me. Like: Do the individual digits of a number carry a
special meaning? Or are they just a vehicle to represent the
number?
If the individual binary digits have a meaning (and the number
they form is only their storage, think: container), viewing them
through a consolidated representation of the number imposes extra
work to derive the individual digits. Because it’s the digits
that matter, the number itself has no meaning of its own, even
though the two are technically the same.
The opposite is the case with 8-bit RGB color channels:
the digits carry no meaning by themselves. It’s the whole number
that represents the brightness of the channel. Whether any
digit is 0, 1, 2, 3, … is useless information on its own. It only
serves a purpose when one has the whole number available. The
digits are only a tool to visualize the number here.
…unless we consolidate multiple channels into one number:
e.g. `#B03931` (= 0xB0_39_31)
While the number itself still represents a specific color (mixed
from the three channels R/G/B), the meaning only becomes apparent
by looking at the digits that represent (the brightness of) the
individual channels. Once one converts that number (the “color
code”) to decimal (→ 11548977), the meaning is lost.
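A minimal D sketch of that point (variable names are mine): the
hex digits map straight onto the channels, while the decimal form
hides that structure.

    import std.stdio;

    void main()
    {
        uint color = 0xB0_39_31;           // same value as 11_548_977
        ubyte r = (color >> 16) & 0xFF;    // 0xB0
        ubyte g = (color >>  8) & 0xFF;    // 0x39
        ubyte b =  color        & 0xFF;    // 0x31
        writefln("R=0x%02X G=0x%02X B=0x%02X", r, g, b);
    }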
Another example:
If someone told me that DDRB (Data Direction Register B) of my
ATmega328P were 144, I’d know nothing, even though technically
I’ve been told everything. I first have to separate that number
into binary digits; only then do I find out what I could have been
told in the first place: PINB4 (digit 4) and PINB7 (digit 7) are
set to 1 (“output”).
The hex form 0x90 might make the separation easier, but it’s
still a number representing a certain state, not the state data
itself. The binary form, however, matches the actual state data
perfectly.
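To illustrate (a plain D sketch, nothing AVR-specific): 144, 0x90
and 0b1001_0000 are one and the same value, but only the binary
form shows the set bits directly.

    import std.stdio;

    void main()
    {
        ubyte ddrb = 144;
        assert(ddrb == 0x90 && ddrb == 0b1001_0000);
        foreach (bit; 0 .. 8)
            if (ddrb & (1 << bit))
                writeln("bit ", bit, " is set");   // prints 4 and 7
    }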
Why this differentiation matters:
If we get one digit wrong, how far off the “whole number” is
depends on the position of that digit in the number (see the
worked numbers after this list); e.g.
• If we’re talking about a color channel, then 255 (0xFF) vs 55
(0x37) makes a whole lot of difference, while 255 (0xFF) vs
250 (0xFA) might be barely visible in real life. Nevertheless, the
“whole” color channel (an atomic thing from this point of
view) is wrong. There’s practically no specific sub-portion that
is wrong (we’re only making one up by partitioning the number
into digits).
• If a binary digit (bit) of the data direction register in my
microcontroller is wrong, only one pin will malfunction (further
impact depending on the application’s circuit; not the best
example, obviously). In other words, only a part of the whole
thing is wrong.
• If one channel of our RGB color lamp is off by whatever value,
at first glance the whole color might look wrong (because colors
mix in our eyes/brains), but in fact it’s only one LED/bulb of
the three that is actually wrong.
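A few worked numbers for the bullets above (my own illustration,
in plain D):

    import std.stdio;

    void main()
    {
        // Color channel: how far off the value is depends on which
        // digit is wrong.
        writeln(0xFF - 0xFA);   // 5   -- barely visible
        writeln(0xFF - 0x37);   // 200 -- a whole lot of difference

        // Data direction register: one wrong bit is exactly one wrong pin.
        ubyte intended = 0b1001_0000;
        ubyte actual   = 0b1000_0000;      // bit 4 dropped
        writeln(intended ^ actual);        // 16 == 1 << 4: a single pin differs
    }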
Don’t get me wrong, please. Of course, there is a difference
between binary, decimal and hexadecimal digits.
But again, isolated digits have no actual standalone meaning for
things like the brightness of a single color channel. On the other
hand: if we consolidate the binary digits of a register of our
microcontroller to dec or hex, even a single digit being off by a
little will now also make a huge impact (on up to 4 pins!).
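A quick check of that “up to 4 pins” remark (hypothetical values,
plain D):

    import std.stdio;

    void main()
    {
        ubyte intended = 0x90;                 // 0b1001_0000
        ubyte mistyped = 0x9F;                 // low hex digit mistyped: 0 -> F
        writefln("%08b", intended ^ mistyped); // 00001111 -- four bits (pins) differ
    }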
In the end it comes down to the “atomic” unit we’re looking at.
Like I wrote, if any of the (irrelevant) digits of the value of
our color channel is wrong, the whole channel is wrong.
> (In 8th grade I took a 2 week summer school course in touch
> typing. The typewriters were mechanical monsters, you really
> had to hammer the keys to get it to work, but that helped build
> the muscle memory. Having a lifetime of payoff from that was
> soooo worth the few hours.)
Touch-typing is useful to me, too :)
Learning to mentally translate hex to binary patterns isn’t, to
be honest (at least yet; removing binary literals from D would
introduce a potential use case).
If you ask me, hex numbers are only arguably nice for the first
4 bits; beyond those I have to keep track of the position as
well. There we go: on the screen I could at least use a pointer or
a pencil (if I wanted); in my mind there is no such option.
Most of the time, when I’m working with binary data, individual
binary digits (“bits”) of bytes don’t matter. Binary
representation serves no purpose there. If it weren’t for control
characters etc., ISO-8859-15 visualization would work for me as
well…
Memorizing the 16-pattern HEX→BIN table doesn’t really help me
with binary data retrieved from /dev/urandom, or when mysql-native
(a MySQL/MariaDB client library) decides to return BINARY columns
typed as strings. (In case someone wondered what kind of binary
data I mostly work with, media files aside.)
But if someone types "DDRB |= 144;" when programming my
microcontroller, I’ll nervously get my calculator out ;)
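For comparison, here’s what I’d rather read (a hypothetical
sketch; DDRB is just a stand-in variable, not the real
memory-mapped register, using D’s current binary literal syntax):

    import std.stdio;

    void main()
    {
        ubyte DDRB = 0;                // stand-in for the real register
        DDRB |= 0b1001_0000;           // pins 4 and 7 -- no calculator needed
        assert(DDRB == 144);           // same value as the decimal form
        writefln("DDRB = 0b%08b", DDRB);
    }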