Deprecate implicit conversion between signed and unsigned integers
Jonathan M Davis
newsgroup.d at jmdavisprog.com
Wed May 15 02:59:27 UTC 2024
On Sunday, May 12, 2024 7:32:36 AM MDT Paul Backus via dip.ideas wrote:
> D would be a simpler, easier-to-use language if these implicit
> conversions were removed. The first step to doing that is to
> deprecate them.
In my experience, this hasn't been a big enough issue for me to care, and
it's seemed like more of an academic concern than an actual problem, but I
probably just don't typically write the kind of code that runs into problems
because of it.
So, I don't mind the status quo, but I'm also fine with getting rid of such
implicit conversions.
The main question IMHO is how annoying it'll be in practice. The primary
case I can think of where there would likely be problems would be code that
returns -1 for an index with size_t (e.g. some of the Phobos functions do
that when the item being searched for isn't found). It's something that
works perfectly fine in general, but it means comparing a signed type and an
unsigned type. It also sometimes mean explicitly assigning -1 to an unsigned
type. Those can be replace with using the type's max instead, so it's not
the end of the world buy any means, but it will require code changes, and
the result is arguably uglier.
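A minimal sketch of the idiom in question (a hypothetical find function, not an actual Phobos one) — today the -1 converts implicitly, and after the deprecation the size_t.max spelling would be required:

```d
// A find-style function using the -1-as-not-found idiom described
// above. indexOfValue is a made-up name for illustration.
size_t indexOfValue(const int[] haystack, int needle)
{
    foreach (i, v; haystack)
        if (v == needle)
            return i;
    return -1; // implicitly converts to size_t.max today
}

void main()
{
    auto arr = [10, 20, 30];
    assert(indexOfValue(arr, 20) == 1);

    // Status quo: comparing the unsigned result against -1 works
    // because -1 is implicitly converted to size_t.max.
    assert(indexOfValue(arr, 99) == -1);

    // The replacement, which involves no signed/unsigned mixing:
    assert(indexOfValue(arr, 99) == size_t.max);
}
```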
As Steven pointed out though, VRP should still allow the conversion where
appropriate, which should reduce how much code would need to be changed.
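As I understand the VRP point, it's this: the compiler already tracks the possible range of an expression, so conversions it can prove are safe would keep compiling without casts. A sketch:

```d
void main()
{
    int x = 300;

    // VRP tracks value ranges: x & 0xFF is provably in [0, 255],
    // so it fits in uint (and even ubyte) and converts implicitly.
    uint a = x & 0xFF;
    ubyte b = x & 0xFF;

    // A plain int has range [int.min, int.max], which does not fit
    // in uint -- this is the implicit conversion being deprecated.
    uint c = x;
}
```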
A related problem is that the compiler allows implicit conversions between
character types and integer types. And personally, I care about that one far
more and would love to see that changed, but I'm not against the idea of
getting rid of implicit conversions between signed and unsigned integer
types.
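For reference, the character/integer conversions in question look like this — characters silently participate in integer arithmetic and assignment in both directions:

```d
void main()
{
    char c = 'A';
    int n = c;      // char implicitly widens to int
    assert(n == 65);

    char d = 66;    // an int value becomes a char without a cast
    assert(d == 'B');

    // Mixed arithmetic quietly treats characters as numbers:
    assert('b' - 'a' == 1);
}
```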
- Jonathan M Davis