PROPOSAL: Implicit conversions of integer literals to floating point

Tomek Sowiński just at ask.me
Thu Dec 30 03:11:40 PST 2010


Don <nospam at nospam.com> wrote:
> BACKGROUND:
> D currently uses a very simple rule for parameter matching:
> * it matches exactly; OR
> * it matches using implicit conversions; OR
> * it does not match.
> 
> There's an important extra feature: polysemous literals (those which
> can be interpreted in multiple ways) have a preferred interpretation.
> So 'a' is char (rather than wchar or dchar); 57 is an int (rather than
> short, byte, long, or uint); and 5.0 is a double (rather than float or
> real).
> This feature acts as a tie-breaker in the case of ambiguity. Notice
> that the tie-breaking occurs between closely related types.
> If you implement overloading on any two of the possibilities, you
> would always overload the preferred type anyway. (E.g., it doesn't make
> sense to overload 'short' and 'uint' but not 'int'). So this all works
> in a satisfactory way.
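> 
> To make the tie-breaking concrete, a minimal sketch (assuming current
> compiler behavior; f and g are hypothetical names):
> 
> void f(char c)  {}
> void f(wchar c) {}
> void f(dchar c) {}
> 
> void g(short x) {}
> void g(int x)   {}
> void g(long x)  {}
> 
> void main()
> {
>    f('a');  // 'a' is polysemous; exact match, resolves to f(char)
>    g(57);   // 57 is polysemous; exact match, resolves to g(int)
>    static assert(is(typeof(5.0) == double));
> }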
> 
> THE PROBLEM:
> Unfortunately, the tie-breaking fails for integer literals used as
> floating-point parameters.
> Consider:
> 
> void foo(double x) {}
> 
> void main()
> {
>    foo(0);
> }
> 
> This compiles correctly; 0 converts to double using implicit
> conversions. Now add:
> void foo(real x) {}
> void foo(float x) {}
> And now the existing code won't compile, because 0 is ambiguous.
> Adding such overloads is a common activity. It is totally unreasonable
> for it to break existing code, since ANY of the overloads would be
> acceptable.
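> 
> A self-contained repro of the ambiguity (the exact error wording
> varies by compiler version):
> 
> void foo(float x)  {}
> void foo(double x) {}
> void foo(real x)   {}
> 
> void main()
> {
>    foo(0);    // error: foo called with argument types (int)
>               // matches more than one overload
>    foo(0.0);  // OK: exact match with foo(double)
> }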
> 
> The language doesn't have any reasonable methods for dealing with
> this. The only one that works at all is to add a foo(int) overload.
> But it scales very poorly -- if you have 4 floating point parameters,
> you need to add 15 overloads, each with a different combination of int
> parameters. And it's wrong -- it forces the function to accept int
> variables (but not uint, short, or long!) when all you really need
> is for literals to be supported.
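> 
> For instance, with just two floating-point parameters the workaround
> already needs three extra overloads (a sketch; bar is a hypothetical
> function):
> 
> void bar(double x, double y) { /* the real implementation */ }
> void bar(int x, double y) { bar(cast(double) x, y); }
> void bar(double x, int y) { bar(x, cast(double) y); }
> void bar(int x, int y)    { bar(cast(double) x, cast(double) y); }
> 
> With 4 parameters that's 2^4 - 1 = 15 forwarding overloads.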
> 
> And no, templates are most definitely not a solution, for many reasons
> (they break even more code, they can't be virtual functions, etc.).
> 
> This problem has already hit Phobos. We inserted a hack so that
> sqrt(2) will work. But exp(1) doesn't work.
> Note that the problems really arise because we've inherited C's rather
> cavalier approach to implicit conversion.
> 
> PROPOSAL:
> 
> I don't think there's any way around it: we need another level of
> implicit conversion, even if only for the special case of integer
> literals->floating point.
> From the compiler implementation point of view, the simplest way to do
> this would be to add a level between "exact" and "implicit". You might
> call it "matches with preferred conversions", or "match with literal
> conversions".
> 
> A match which involves ONLY conversions from integer literals to
> double, should be regarded as a better match than any match which
> includes any other kind of implicit conversion.
> As usual, if there is more than one "preferred conversion" match, it
> is flagged as ambiguous.
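> 
> Under the proposal, resolution would work like this (a sketch of the
> intended behavior, not of any current compiler):
> 
> void foo(float x)  {}
> void foo(double x) {}
> void foo(real x)   {}
> 
> void main()
> {
>    foo(0);   // OK: int literal -> double is a preferred conversion,
>              // so foo(double) wins over foo(float) and foo(real)
>    int n;
>    foo(n);   // still ambiguous: n is a variable, so all three
>              // candidates are ordinary implicit conversions
> }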
> 
> BTW, I do not think this applies to other literals, or other types. It
> only applies to polysemous types, and int literal->floating point is
> the only such case where there's no tie-breaker in the target type.
> 
> It's a very special case, but a very common and important one. It
> applies to *any* overloaded floating point function. So I think that a
> special case in the language is justified.

I'd cautiously say it's a reasonable proposal: application programmers
shouldn't be bothered with the minutiae of library evolution. My
concern is that, instead, they would have to be familiar with the
minutiae of D literals to understand overload resolution.

-- 
Tomek


More information about the Digitalmars-d mailing list