Strange implicit conversion of integers on concatenation

Jonathan M Davis newsgroup.d at jmdavisprog.com
Mon Nov 5 21:11:27 UTC 2018


On Monday, November 5, 2018 9:31:56 AM MST H. S. Teoh via Digitalmars-d 
wrote:
> On Mon, Nov 05, 2018 at 03:58:40PM +0000, Adam D. Ruppe via Digitalmars-d 
wrote:
> > On Monday, 5 November 2018 at 15:36:31 UTC, uranuz wrote:
> > > I believe that the following code shouldn't even compile, but it does
> > > and gives a non-printable symbol appended at the end of the string.
> >
> > Me too, this is a design flaw in the language. Following C's example,
> > int and char can convert to/from each other. So string ~ int will
> > convert int to char (as in reinterpret cast) and append that.
> >
> > It is just the way it is, alas.
>
> I have said before, and will continue to say, that I think implicit
> conversion between char and non-char types in D does not make sense.
>
> In C, converting between char and int is very common because of the
> conflation of char with byte, but in D we have explicit types for byte
> and ubyte, which should take care of any of those kinds of use cases,
> and char is explicitly defined to be a UTF8 code unit.  Now sure, there
> are cases where you want to get at the numerical value of a char --
> that's what cast(int) and cast(char) are for.  But *implicitly*
> converting between char and int, especially when we went through the
> trouble of defining a separate type for char that stands apart from
> byte/ubyte, does not make any sense to me.
>
> This problem is especially annoying with function overloads that take
> char vs. byte: because of implicit conversion, often the wrong overload
> ends up getting called WITHOUT ANY WARNING.  Once, while refactoring
> some code, I changed a representation of an object from char to a byte
> ID, but in order to do the refactoring piecemeal, I needed to overload
> between byte and char so that older code would continue to compile while
> the refactoring is still in progress.  Bad idea.  All sorts of random
> problems and runtime crashes happened because C's stupid int conversion
> rules were liberally applied to D types, causing a gigantic mess where
> you never know which overload will get called. (Well OK, it's
> predictable if you sit down and work it out, but it's just plain
> annoying when a lousy char literal calls the byte overload whereas a
> char variable calls the char overload.)  I ended up having to wrap the
> type in a struct just to stop the implicit conversion from tripping me
> up.
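For reference, a minimal sketch of the concatenation behavior discussed at the
top of the quote (assuming the DMD behavior described in the thread, where an
integer literal is value-range-propagated to char and appended as a code unit):

```d
import std.stdio : writeln;
import std.conv : to;

void main()
{
    // Compiles: the literal 33 fits in char, so it is implicitly
    // converted to the code unit '!' rather than the digits "33".
    string s = "value: " ~ 33;
    writeln(s);  // value: !

    // What is almost always wanted instead: an explicit conversion
    // of the number to its textual representation.
    string t = "value: " ~ 33.to!string;
    writeln(t);  // value: 33
}
```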

+1
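The struct-wrapper workaround described at the end of the quote might look
roughly like this (names are hypothetical, not from the original code):

```d
// A struct does not participate in the built-in char/byte/int implicit
// conversions, so a mixed-up argument becomes a compile error instead of
// silently selecting the wrong overload.
struct ByteId
{
    byte value;
}

void register(ByteId id) { /* new byte-based representation */ }

void main()
{
    register(ByteId(7));   // fine: explicit construction
    // register(7);        // error: int does not implicitly convert to ByteId
    // register('a');      // error: char does not implicitly convert either
}
```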

Unfortunately, I don't know how reasonable it is to fix it at this point,
much as I would love to see it fixed. Historically, I don't think that
Walter could have been convinced, but based on some of the stuff he's said
in recent years, I think that he'd be much more open to the idea now.
However, even if he could now be convinced that ideally the conversion
wouldn't exist, I don't know how easy it would be to get a DIP through when
you consider the potential code breakage. But maybe it's possible to do it
in a smooth enough manner that it could work - especially since many of the
kinds of cases where you might actually _want_ such a conversion already
require casting anyway thanks to the rules about integer promotions and
narrowing conversions (e.g. when adding to or subtracting from chars).
Regardless, it would have to be a well-written DIP with a clean transition
scheme. Having the DIP on removing the implicit conversion of integer and
character literals to bool accepted would be a step in the right direction,
though. If that DIP gets rejected (which I sure hope it isn't), then there's
probably no hope for a DIP fixing the char situation.
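The promotion and narrowing rules mentioned above, which already force a cast
in many of the places where one might want the char/int conversion, look like
this in practice (a small sketch, not from the original thread):

```d
void main()
{
    char c = 'a';

    // c = c + 1;  // error: "c + 1" is promoted to int, and the int
    //             // result does not implicitly narrow back to char.
    c = cast(char)(c + 1);  // the explicit cast is already required here
    assert(c == 'b');

    ++c;  // in-place increment is fine: no intermediate int result
    assert(c == 'c');
}
```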

- Jonathan M Davis
