A signed 1-bit type?

Jonathan M Davis newsgroup.d at jmdavisprog.com
Fri Sep 22 02:54:44 UTC 2023


On Thursday, September 21, 2023 12:55:42 AM MDT monkyyy via Digitalmars-d 
wrote:
> On Thursday, 21 September 2023 at 06:17:16 UTC, Walter Bright wrote:
> >> D’s booleans, however, are unsigned integer types.
> >
> > Yup. Changing that would break an unknown amount of code.
>
> Correctness really should come first; it won't be nearly as big a
> breaking change as safe by default, and we all know how good an
> idea that is.

And what do you expect a 1-bit boolean type would buy us? For most
code, the encoding of bool is irrelevant.

IMHO, if there's a problem, it's that bool is treated as an integer type at
all, meaning that you can pass a bool to a function that takes an integer
type without casting it and that 0 and 1 can be passed to a function that
takes bool without casting them (which can become particularly annoying with
Value-Range Propagation, because then something like foo(1) can end up
selecting a bool overload of foo over other integer overloads).
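
For instance, a minimal sketch of the kind of overload surprise I mean
(the overloads here are made up, and I pair bool with long rather than
int so that neither overload is an exact match for the literal):

import std.stdio;

void foo(bool b) { writeln("bool overload"); }
void foo(long l) { writeln("long overload"); }

void main()
{
    bool b = 1;    // 0 and 1 implicitly convert to bool thanks to VRP
    int  i = true; // and bool implicitly converts to integer types

    foo(1); // 1 converts to bool via VRP, and the bool overload is the
            // more specialized match, so it wins
    foo(2); // 2 doesn't fit in bool, so the long overload is called
}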

However, we've argued about this here in the past, and Walter is so used
to treating bool as an integer type (which can be useful for code doing
bit operations and math) that I'm not sure he ever really understood why
many of us objected to that treatment (and even if he fully understood,
he did not agree). I suspect that it comes from his background in
low-level C, where doing bitwise stuff with bools is normal, whereas
someone who has dealt more with languages that treat bool as a
non-integer type is much more likely to be very unhappy with bool being
treated like an integer type. VRP does make the problem worse in D than
it would be in other languages though, since it results in more implicit
type conversions, and IIRC that's what caused a lot of the previous
discussion on the matter.

But either way, the underlying implementation of bool doesn't really affect
either approach. Whether it's essentially a byte where 0 is false and all
other values are true, whether it's a bit where 0 is false and 1 is true, or
whether it's something else entirely with an opaque implementation doesn't
matter at all if bool is not treated as an integer type. And if it is
treated as an integer type, then whether it's a bit or a byte really doesn't
matter much (it'll usually be promoted to int for math anyway), and the
current implementation follows what C has, which is good for compatibility.
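
That byte-sized, integer-like behaviour is easy to observe today (a
quick sketch, nothing controversial here):

void main()
{
    static assert(bool.sizeof == 1);  // same storage size as C's bool
    static assert(is(typeof(true + true) == int)); // promoted for math

    bool flag = true;
    int n = flag;   // implicit bool -> int
    assert(n == 1);
    assert(flag + flag == 2); // bools act as 0/1 in arithmetic
}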

If we did change how bool worked, it would probably be to simply make it
not implicitly convert to and from integer types (as has been discussed
in the past); there wouldn't be any need to change how bool is actually
implemented for that to work. Requiring casts would fix certain classes
of bugs but would also make some code doing bitwise operations more
tedious (and potentially more bug-prone), as the sketch below shows.
Switching to a bit implementation wouldn't help with any of that.
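
Roughly, here's what I mean by more tedious (the flag variables are
made up, and the cast-heavy version is hypothetical; it's what such a
rule would force on bit-twiddling code):

import std.stdio;

void main()
{
    bool readable = true, writable = false;

    // Today, bools slot straight into integer expressions:
    int flags = readable << 1 | writable; // pack bools into bit flags
    int count = readable + writable;      // count how many are true

    // Without the implicit conversions, the same code would need
    // explicit casts everywhere (hypothetical, not current D):
    int flags2 = (cast(int) readable) << 1 | cast(int) writable;
    int count2 = cast(int) readable + cast(int) writable;

    writeln(flags, " ", count); // prints: 2 1
    assert(flags == flags2 && count == count2);
}
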
But regardless, at this point, I think that it's pretty clear that D's
bool is not going to change, because Walter is very happy with how it
currently works, and it's highly unlikely that someone is going to come
up with an argument good enough to get him to change it and break
existing code that actually wants to treat bool as an integer type. Much
as I'd personally like to see bool changed with regards to implicit
conversions, I think that that ship has long since sailed.

- Jonathan M Davis





