Deprecate implicit conversion between signed and unsigned integers
Walter Bright
newshound2 at digitalmars.com
Thu Feb 6 09:10:41 UTC 2025
[I'm not sure why a new thread was created?]
This comes up now and then. It's an attractive idea, and seems obvious. But I've
always been against it for multiple reasons.
1. Pascal solved this issue by not allowing any implicit conversions. The result
was casts everywhere, which made the code ugly. I hate ugly code.
2. Java solved this by not having an unsigned type. People went to great lengths
to emulate unsigned behavior. Eventually, the Java people gave up and added it.
3. Is `1` a signed int or an unsigned int?
4. What happens with `p[i]`? If p is the beginning of a memory object, we want i
to be unsigned. If p points to the middle, we want i to be signed. What should
be the type of `p - q`, signed or unsigned? (A sketch after this list makes this
concrete.)
5. We rely on 2's complement overflow semantics to get the same behavior whether
i is signed or unsigned, most of the time (also sketched after this list).
6. Casts are a blunt instrument that impair readability and can cause unexpected
behavior when changing a type in a refactoring. High quality code avoids the use
of explicit casts as much as possible.
7. C behavior on this is extremely well known.
8. The Value Range Propagation feature was a brilliant solution that resolved
most of the issues with implicit signed and unsigned conversions without causing
any problems (illustrated below).
9. Array bounds checking tends to catch the usual bugs that come from conflating
signed with unsigned (also sketched below). Array bounds checking is a total
winner of a feature.
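
To make point 4 concrete, here is a minimal D sketch (the function and variable
names are invented for the illustration): the natural signedness of an offset
depends on where the pointer points, and a pointer difference is already defined
to be the signed ptrdiff_t.

```d
void pointerUse()
{
    int[8] buf;
    int* begin = buf.ptr;        // points at the start: valid offsets are 0 .. 7
    int* mid   = buf.ptr + 4;    // points into the middle: offsets may be negative

    size_t    forward = 2;
    ptrdiff_t back    = -2;

    int a = begin[forward];      // an offset from the start wants to be unsigned
    int b = mid[back];           // an offset from the middle wants to be signed
    ptrdiff_t d = mid - begin;   // a pointer difference is signed (ptrdiff_t)
}
```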
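
Point 5 in a few lines (again an added illustration, not part of the original
message): under 2's complement, signed and unsigned arithmetic produce the same
bit patterns, so mixed code usually computes the same answer either way.

```d
import std.stdio;

void main()
{
    int  si = -1;
    uint ui = si;                // same bit pattern: 0xFFFFFFFF

    writeln(si + 1);             // 0: wraps in 2's complement
    writeln(ui + 1);             // 0: wraps the same way
    writeln(cast(uint)(si + 5) == ui + 5);   // true: identical bits either way
}
```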
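
For point 8, a small sketch of Value Range Propagation (the function name is
made up for the example): when the compiler can prove that an expression's range
fits the target type, the narrowing assignment needs no cast.

```d
ubyte lowByte(int x)
{
    return x & 0xFF;    // OK without a cast: the range is provably 0 .. 255
    // return x;        // error: the full int range does not fit a ubyte
}
```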
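
And for point 9, an added illustration of the usual failure mode being caught: a
"not found" index of -1 becomes a huge size_t, which the bounds check rejects at
run time (in non-release builds) instead of letting the read go out of bounds.

```d
import std.algorithm : countUntil;
import std.stdio;

void main()
{
    auto a = [10, 20, 30];
    auto i = a.countUntil(99);   // not found: returns -1 (a ptrdiff_t)
    writeln(a[i]);               // -1 converts to size_t.max as an index;
                                 // the bounds check throws a RangeError
}
```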
Andrei and I went around and around on this, pointing out the contradictions.
There was no solution. There is no "correct" answer for integer 2's complement
arithmetic.
Here's what I do:
1. use unsigned if the declaration should never be negative.
2. use size_t for all pointer offsets
3. use ptrdiff_t for deltas of size_t that could go negative
4. otherwise, use signed
Stick with those and most of the problems will be avoided. A short sketch of the
four rules applied together follows.
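
This example is illustrative only; the names and the scenario are invented.

```d
struct Line
{
    uint number;         // 1. a line number is never negative, so unsigned
    int  indentDelta;    // 4. otherwise (it can go negative), signed
}

void shiftAll(int[] a, int amount)
{
    for (size_t i = 0; i < a.length; ++i)   // 2. pointer/array offsets are size_t
        a[i] += amount;
}

ptrdiff_t growth(size_t oldLen, size_t newLen)
{
    // 3. a delta of size_t values that can go negative: ptrdiff_t.
    // The 2's complement wrap (point 5) makes the signed value come out
    // right as long as the true difference fits in a ptrdiff_t.
    return newLen - oldLen;
}
```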