Is this a bug or a VERY sneaky case?

rempas
Sat Dec 25 13:32:13 UTC 2021

First of all, I would like to ask if there is a better place to 
make this kind of post. I'm 99% sure that I have found a bug, but 
I still want to ask just to be sure there isn't something going 
on that I don't know about. So yeah, if there is a better place 
for this kind of thing, please let me know.

Ok, so I'm making a library, and in one of its functions, which 
converts an integer to a string (char*) and returns it, there is 
a weird bug. The problem appears when I negate the given number, 
add one to it, and assign the result to an unsigned long (ulong) 
variable. The line goes as follows: `ulong fnum = -num + 1;`. So 
it first negates `num` and then adds 1 to the result. To test 
that my function works, I'm using the macros from "limits.h" to 
check the smallest and biggest possible values of each type. 
Everything seems to work great except for "INT_MIN". I don't know 
why, but this one doesn't work as expected. What's weird (and why 
I think it's 99% a bug) is that if I change that one line of code 
into two separate lines, it works (even though it's the same 
thing under the hood). What's the change? Well, we go from:

`ulong fnum = -num + 1;` to `ulong fnum = -num; fnum++;`

which, like I said, is the exact same thing! So what are your 
thoughts? Even if there are more things going on here that I 
don't know about, it doesn't make sense to me that everything 
else (including "LONG_MIN", which is a bigger number in absolute 
value) works and "INT_MIN" doesn't. Also keep in mind that I'm 
using LDC2 to compile, because I'm using GCC inline assembly 
syntax in the library, so I can't compile with DMD.

In case someone wants to see the full function, here is the code:

import core.memory;

import core.stdc.stdio;
import core.stdc.stdlib;
import core.stdc.limits;

alias u8  = ubyte;
alias i8  = byte;
alias u16 = ushort;
alias i16 = short;
alias u32 = uint;
alias i32 = int;
alias u64 = ulong;
alias i64 = long;

enum U8_MAX  = 255;
enum U16_MAX = 65535;
enum U32_MAX = 4294967295;
enum U64_MAX = 18446744073709551615UL; // needs the UL suffix to fit

enum I8_MIN  = -128;
enum I8_MAX  = 127;
enum I16_MIN = -32768;
enum I16_MAX = 32767;
enum I32_MIN = -2147483648;
enum I32_MAX = 2147483647;
enum I64_MIN = -9223372036854775807L - 1; // the literal 9223372036854775808 doesn't fit in long
enum I64_MAX = 9223372036854775807;

enum is_same(alias value, T) = is(typeof(value) == T);

char* to_str(T)(T num, u8 base) {
   if (num == 0) return cast(char*)"0";

   bool min_num = false;

   // Digit count for each size
   // That's not the full code, only the one for
   // signed numbers which is what we want for now
   static if (is_same!(num, i8)) {
     enum buffer_size = 5;
   } else static if (is_same!(num, i16)) {
     enum buffer_size = 7;
   } else static if (is_same!(num, i32)) {
     enum buffer_size = 12;
   } else {
     enum buffer_size = 21;
   }

   // Overflow check
   static if (is_same!(num, i8)) {
     if (num == I8_MIN) {
       min_num = true;
     }
   } else static if (is_same!(num, i16)) {
     if (num == I16_MIN) {
       min_num = true;
     }
   } else static if (is_same!(num, i32)) {
     if (num == I32_MIN) {
       min_num = true;
     }
   } else {
     if (num == I64_MIN) {
       min_num = true;
     }
   }

   char* buf = cast(char*)pureMalloc(buffer_size);
   i32 i = buffer_size - 1;
   buf[i--] = '\0'; // NUL-terminate; digits are written backwards from here
   u64 fnum;

   if (num < 0) {
     if (min_num) {
       fnum = -num + 1; // This line causes the error
       // It works if used as separate instructions:
       // fnum = -num;
       // fnum++;
     } else {
       fnum = -num;
     }
   } else {
     fnum = num;
   }

   for(; fnum && i; --i, fnum /= base)
     buf[i] = "0123456789abcdef"[fnum % base];

   if (num < 0) {
     buf[i] = '-';
     return buf + i;
   }

   return buf + (i + 1);
}

extern (C) void main() {
   printf("The value is %d\n",         INT_MIN);
   printf("The value is %s\n",  to_str(INT_MIN, 10));
}

More information about the Digitalmars-d mailing list