Right way to show numbers in binary/hex/octal in your opinion?

Rumbu rumbu at rumbu.ro
Wed Dec 29 07:12:02 UTC 2021


On Tuesday, 28 December 2021 at 23:45:17 UTC, Siarhei Siamashka 
wrote:
> On Monday, 27 December 2021 at 12:48:52 UTC, Rumbu wrote:
>> How can you convert 0x8000_0000_0000_0000 to long?
>>
>> And if your response is "use a ulong", I have another one: how 
>> do you convert -0x8000_0000_0000_0000 to ulong.
>
> If you actually care about overflows safety, then both of these 
> conversion attempts are invalid and should raise an exception 
> or allow the error to be handled in some different fashion. For 
> example, Crystal language ensures overflows safety and even 
> provides two varieties of string-to-integer conversion methods 
> (the one with '?' in name returns nil on error, the other 
> raises an exception):

I don't care about overflows; I care about the fact that D must 
use the same method when it converts numbers to strings and when 
it converts them back the other way around.

Currently D dumps byte.min in hex as "80". But it throws an 
overflow exception when I try to get my byte back from "80".
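
To make that asymmetry concrete, here is a minimal sketch of what 
I am describing (behaviour as I observe it with current Phobos; 
the exact exception message may differ):

```d
import std.conv : to, ConvOverflowException;
import std.stdio : writeln;

void main() {
    byte b = byte.min;              // -128
    string s = b.to!string(16);     // "80" -- the raw two's complement bits
    writeln(s);

    // Reading the same byte back fails: 0x80 == 128 > byte.max
    try {
        byte back = s.to!byte(16);
        writeln(back);
    } catch (ConvOverflowException e) {
        writeln("overflow: ", e.msg);
    }
}
```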

Fun fact: when I wrote my decimal library, I had millions of 
expected values in a file, and some of the decimal-int conversions 
failed according to the tests. The error source was not my code 
but this line: 
https://github.com/rumbu13/decimal/blob/a6bae32d75d56be16e82d37af0c8e4a7c08e318a/src/test/test.d#L152
It took me some time to dig through the test file and realise 
that among the values there are some strings that cannot be 
parsed in D (the ones starting with "8").

Yes, dumping it as "-80" could be a solution, but the standard 
library does not even parse the "-" today for bases other than 10.
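
For illustration, a sketch of that parsing side (again, assuming 
the Phobos behaviour I observe today):

```d
import std.conv : to, ConvException;
import std.stdio : writeln;

void main() {
    // Base 10 accepts a sign...
    writeln(to!byte("-128"));       // -128

    // ...but with an explicit radix the "-" is not accepted, so even
    // a "-80" round-trip format would not parse today.
    try {
        writeln(to!byte("-80", 16));
    } catch (ConvException e) {
        writeln("parse error: ", e.msg);
    }
}
```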

>
> If you want to get rid of overflow errors, then please consider 
> using a larger 128-bit type or a bigint. Or figure out what's 
> the source of this out-of-range input and fix the problem there.

That's why I gave you the "long" example. We don't have a 128-bit 
type (yet). That was the idea in the first place: the language's 
number types have a limited range. And when we do get cent, we 
will lack a 256-bit type.

>
> ```D
> import std;
>
> void main() {
>   short a = -1;
>   writeln(a.to!string(16)); // prints "FFFF"
>   long b = 65535;
>   writeln(b.to!string(16)); // prints "FFFF"
> }
> ```
> Both -1 and 65535 become exactly the same string after 
> conversion. How are you going to convert it back?

I would rather assume that I know exactly what kind of value I am 
expecting to read.
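
What I mean is something like the sketch below: if I know I am 
reading back a short, I can parse the digits as the unsigned bit 
pattern and reinterpret them. The `fromHexBits` helper is 
hypothetical, just to illustrate the idea:

```d
import std.conv : to;
import std.stdio : writeln;
import std.traits : Unsigned, isIntegral;

// Hypothetical helper: read hex digits as the unsigned bit pattern of T,
// then reinterpret them as T -- mirroring how to!string(16) dumped them.
T fromHexBits(T)(string s) if (isIntegral!T)
{
    return cast(T) s.to!(Unsigned!T)(16);
}

void main() {
    short a = "FFFF".fromHexBits!short;  // -1, because I expect a short
    long  b = "FFFF".fromHexBits!long;   // 65535, because I expect a long
    writeln(a, " ", b);
}
```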

>
> Also you haven't provided any answer to my questions from the 
> earlier message, so I'm repeating them again:
>
>  1. How does this "internal representation" logic make sense 
> for the bases, which are not powers of 2?
>

Here you have a point :) I never thought about bases other than 
powers of 2.


>  2. If dumping numbers to strings in base 16 is intended to 
> show their internal representation, then why are non-negative 
> numbers not padded with zeroes on the left side (like the 
> negative numbers are padded with Fs) when converted using 
> Dlang's `to!string`?

They are not padded with F's; those F's are exactly what the 
number holds in memory as bits.
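
To be concrete about "what the number holds in memory", a small 
sketch that just dumps the raw bytes of the value (an 
illustration, nothing more):

```d
import std.stdio : writefln;

void main() {
    short a = -1;
    // The raw bytes of the value are FF FF (all bits set, so endianness
    // does not matter here) -- exactly what to!string(16) prints.
    auto bytes = (cast(ubyte*)&a)[0 .. a.sizeof];
    writefln("%(%02X %)", bytes);   // FF FF
}
```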

We are on the same side here: the current to/parse implementation 
is not the best we can get.

Happy New Year :)

