Right way to show numbers in binary/hex/octal in your opinion?

Siarhei Siamashka siarhei.siamashka at gmail.com
Mon Dec 27 09:55:46 UTC 2021

On Monday, 27 December 2021 at 06:55:37 UTC, Rumbu wrote:
> When people are dumping numbers to strings in any other base 
> than 10, they are expecting to see the internal representation 
> of that number.

Different people may have different expectations, and their 
expectations may not be the same as yours.

How does this "internal representation" logic make sense for 
bases that are not powers of 2? Okay, base 10 is a special 
snowflake, but what about the others?

If dumping numbers to strings in base 16 is intended to show 
their internal representation, then why are non-negative numbers 
not padded with zeroes on the left side (like the negative 
numbers are padded with Fs) when converted using Dlang's 
standard library?

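The asymmetry described above can be sketched like this (Python is used 
here only for illustration; `dump_byte_hex` is a hypothetical helper that 
renders a signed 8-bit value by its raw two's-complement bit pattern):

```python
def dump_byte_hex(value: int) -> str:
    """Render a signed 8-bit value by its raw two's-complement bits."""
    assert -128 <= value <= 127
    return format(value & 0xFF, 'x')  # mask down to the 8-bit pattern

print(dump_byte_hex(-1))  # 'ff' -- negative values come out padded with f's
print(dump_byte_hex(1))   # '1'  -- but there is no matching zero padding
```

If the output really showed the internal representation, one would expect 
`'01'` in the second case, not `'1'`.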
As for my expectations, each digit in a base 3 number may be used 
to represent a chosen branch in a ternary tree (similar to how 
each digit in a base 2 number may represent a chosen branch in a 
binary tree). The other bases are useful in a similar way. This 
has nothing to do with the internal representation.
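A minimal sketch of this reading (Python for illustration; `base_digits` 
is a hypothetical helper): the digits of a number in base 3, taken from 
the most significant end, each select one of three branches at successive 
levels of a ternary tree.

```python
def base_digits(n: int, base: int) -> list[int]:
    """Digits of a non-negative n in the given base, most significant first."""
    assert n >= 0
    digits = []
    while True:
        n, d = divmod(n, base)
        digits.append(d)
        if n == 0:
            return digits[::-1]

# 11 is "102" in base 3: take branch 1, then branch 0, then branch 2.
path = base_digits(11, 3)
print(path)  # [1, 0, 2]
```

Nothing in this interpretation refers to how the number is stored in 
memory.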

> Since the sign doesn't have a reserved bit in the 
> representation of integrals (like it has for floats), for me it 
> doesn't make any sense if I see a negative sign before a hex, 
> octal or binary value.

Why does the internal representation have to leak out and cause 
artificial troubles/inconsistencies, when they are trivially 
avoidable?
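For instance, a sign-and-magnitude rendering works uniformly in every 
base and never depends on the operand's bit width (a sketch in Python; 
`to_base` is a hypothetical helper, not any existing library function):

```python
def to_base(n: int, base: int) -> str:
    """Sign-and-magnitude rendering: format the magnitude, prepend the sign."""
    digit_chars = "0123456789abcdef"[:base]  # sketch: bases up to 16 only
    sign, n = ("-", -n) if n < 0 else ("", n)
    out = ""
    while True:
        n, d = divmod(n, base)
        out = digit_chars[d] + out
        if n == 0:
            return sign + out

print(to_base(-128, 16))  # '-80', whether the value came from a byte or a long
print(to_base(-128, 3))   # '-11202', the same rule applied to base 3
```

The two's-complement bit pattern never enters the picture, so bases 3 and 
16 are treated identically.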

> The trickiest value for integrals is the one with the most 
> significant bit set (e.g. 0x80). This can be -128 for byte, but 
> also 128 for any other type than byte. Now, if we go the other 
> way around and put a minus before 0x80, how do we convert it 
> back to byte? If we assume that 0x80 is always 128, -0x80 will 
> be -128 and can fit a byte. On the other hand, you cannot store 
> +0x80 in a byte because it is out of range.

I don't understand what the problem is here. It can be easily 
solved by a unit test that verifies "-0x80" gets correctly 
converted to -128. Or have I missed something?
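Such a check could look like this (sketched in Python, whose built-in 
`int` parser already follows the signed sign-and-magnitude convention):

```python
# Under the signed convention, "-0x80" parses to -128 unambiguously.
value = int("-0x80", 16)
assert value == -128

# -128 fits the signed byte range; +0x80 (i.e. +128) does not.
BYTE_MIN, BYTE_MAX = -128, 127
assert BYTE_MIN <= value <= BYTE_MAX
assert not (BYTE_MIN <= int("+0x80", 16) <= BYTE_MAX)
```

With that convention there is no ambiguity to resolve: the range check is 
the same as for base 10.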

> This is also an issue in Phobos:
> https://issues.dlang.org/show_bug.cgi?id=20452
> https://issues.dlang.org/show_bug.cgi?id=18290

To me this looks very much like self-inflicted damage and 
historical baggage, entirely caused by wrong choices made in the 
past.

More information about the Digitalmars-d mailing list