No more implicit conversion real->complex?!

Don Clugston dac at nospam.com.au
Tue Mar 21 08:10:52 PST 2006


Rioshin an'Harthen wrote:
> "Don Clugston" <dac at nospam.com.au> wrote in message 
> news:dvotc1$489$1 at digitaldaemon.com...
>> Rioshin an'Harthen wrote:
>>> I doubt they'd become that much more complicated. Currently, the DMD 
>>> compiler has to look up all the possible implicit conversions, and if 
>>> there's more than one possible conversion, error out because it can't 
>>> know which one.
>>>
>>> Now, since it knows the types in question - and the type of the value 
>>> being passed to the function, it's not that much more to do, IMHO. 
> Basically, if it would error with an ambiguous implicit conversion, do a 
>>> search of the minimum of the possible types that are larger than the 
>>> current type. If this doesn't match any type, do the error, otherwise 
>>> select the type. A simple search during compile time is all that's 
>>> required if we encounter more than one possible implicit conversion, to 
>>> select the one that is the smallest of the possible ones.
>> (a) your scheme would mean that float->cfloat (64 bits) is preferred over 
>> float->real (80 bits) on x86 CPUs.
> 
> Hmm... yes, a slight problem in my logic. Still fixable, though. Let's 
> introduce a concept of "family" into this, with a family consisting of:
> 
> A: void
> B: char, wchar, dchar
> C: bool
> D: byte, short, int, long, cent
> E: ubyte, ushort, uint, ulong, ucent
> F: float, double, real
> G: ifloat, idouble, ireal
> H: cfloat, cdouble, creal
> 
> etc.
> 
> Now, allow implicit conversion upwards in a family, and between families 
> only if impossible to convert inside the family.

A name I was using instead of "family" was "archetype". I.e., 
archetype!(char) = dchar, archetype!(ifloat) = ireal.

That would mean that a uint would prefer to be converted to a ulong 
rather than to an int. That might cause problems. Maybe.
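A minimal Python sketch of how the family/archetype idea could behave (the table and helper names are illustrative assumptions, not anything from DMD):

```python
# Illustrative model of the proposed "family" rule: implicit conversion
# is allowed upward within a family, and across families only when no
# within-family widening exists. Not the actual DMD algorithm.

FAMILIES = [
    ["char", "wchar", "dchar"],
    ["byte", "short", "int", "long", "cent"],
    ["ubyte", "ushort", "uint", "ulong", "ucent"],
    ["float", "double", "real"],
    ["ifloat", "idouble", "ireal"],
    ["cfloat", "cdouble", "creal"],
]

def family_of(t):
    for members in FAMILIES:
        if t in members:
            return members
    raise KeyError(t)

def widenings(t):
    """Targets t may implicitly convert to within its own family."""
    members = family_of(t)
    return members[members.index(t) + 1:]

def archetype(t):
    """The widest member of t's family, e.g. archetype('char') == 'dchar'."""
    return family_of(t)[-1]

def prefer(src, a, b):
    """Pick the within-family target when exactly one candidate is one."""
    in_a, in_b = a in widenings(src), b in widenings(src)
    if in_a != in_b:
        return a if in_a else b
    return None  # neither, or still ambiguous, under this rule
```

Under this model `prefer('float', 'real', 'cfloat')` picks `real`, addressing point (a), while `prefer('uint', 'ulong', 'int')` picks `ulong`, which is exactly the preference questioned above.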

> This would fix this problem, but it would introduce a different level of 
> complexity into it. It might be worth it, or then again it might not. It's 
> for Walter to decide at a later point. I'd like to have implicit conversion 
> between real and complex numbers - there are plenty of occasions where I've 
> used it, so I'm trying to voice some thoughts on how to preserve 
> those.

How have you been using implicit conversion? Are you talking about in 
functions, or in expressions?

real r;
creal c;

c += r;
c = 2.0;

I think this could be OK. That is, assignment of a real to a creal could 
still be possible, without an implicit conversion.
After all, there are no complex literals, so
creal c = 2 + 3i;
should be the same as
c = 2;
c += 3i;
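As a rough analogy (Python's built-in complex type playing the role of creal here), the decomposition looks like:

```python
# Python analogue of the decomposition above: a "complex literal" is
# just a real assignment followed by adding an imaginary term.
c = complex(2.0)   # c = 2;   assigning a real value to a complex variable
c += 3j            # c += 3i; adding the imaginary part
assert c == 2 + 3j
```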

>> (b) since the size of real is not fixed, the result of the function lookup 
>> could depend on what CPU it's being compiled for!
> 
> True, real is not fixed in size. But according to the D specifications it is 
> the "largest hardware implemented floating point size", and I take it to 
> mean it can't be less in size than double. If a real and a double are the 
> same size, there's no problem, and even less of one if real is larger.

Yes, the only issue is that, for example, real is bigger than cfloat on 
x86, but the same size on PowerPC. And on a machine with 128-bit reals, 
a real could be the same size as a cdouble.

> 
>> (c) What if the function has more than one argument?
>>
>> It might be better to just include a tie-break for the special cases of
>> real types -> complex types, and imaginary -> complex. But I think case 
>> (c) is a serious problem anyway.
>>
>> given
>> func( real, creal ) // #1
>> func( creal, real ) // #2
>> func( creal, creal) // #3
>>
>> should func(7.0, 5.0)
>> match #1, #2, or #3?
> 
> Well, this is a problem, there's no doubt about it. As I take the example, 
> the intention is that at least one of the arguments of the function has to 
> be complex, and #1 and #2 are more like optimized versions than #3. This is 
> still ambiguous.
> 
> If we'd go by symmetry, having it match #3, then the question could be posed 
> as:
> 
> given
> func( real, creal ) // #1
> func( creal, real ) // #2
> 
> should func( 7.0, 5.0 )
> match #1 or #2?
> 
> Still, I would go for the symmetrical - if any one parameter is implicitly 
> converted, first try a version where as many as possible of the parameters 
> are implicitly converted, unless a cast( ) has been used to explicitly mark 
> a type. So I say (in this case) match #3 - I may be utterly wrong, but it's 
> the feeling I have.

So one possibility would be to change the lookup rules to be:
* an exact match
* OR an unambiguous match with implicit conversions, not including 
real->creal, ireal->creal (and possibly not including inter-family 
conversions)
* OR an unambiguous match with implicit conversions, *including* 
real->creal, ireal->creal (possibly including other inter-family 
conversions, like char->short).
* OR it does not match.

which is a little more complicated than the existing D rules, but not by 
much.
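A toy Python model of that tiered lookup (the conversion tables are assumptions for illustration: tier 2 contains only a few real-family widenings, and tier 3 adds the real/imaginary -> complex conversions):

```python
# Toy model of the three-tier overload lookup sketched above.
# The conversion sets are illustrative, not DMD's actual tables.

TIER2 = {("float", "double"), ("float", "real"), ("double", "real")}
TIER3 = TIER2 | {("real", "creal"), ("ireal", "creal"),
                 ("float", "creal"), ("double", "creal")}

def matches(args, params, allowed):
    return (len(args) == len(params) and
            all(a == p or (a, p) in allowed for a, p in zip(args, params)))

def lookup(args, overloads):
    """Resolve at the first tier that yields exactly one match."""
    for allowed in (frozenset(), TIER2, TIER3):  # exact, then tier 2, tier 3
        found = [p for p in overloads if matches(args, p, allowed)]
        if len(found) == 1:
            return found[0]
        if len(found) > 1:
            raise TypeError("ambiguous match")
    raise TypeError("no match")
```

With only #3 declared, `lookup(('real', 'real'), [('creal', 'creal')])` resolves at tier 3; with #1, #2, and #3 all declared, the same call is ambiguous even under these rules, which is exactly the unresolved case (c).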

> No, we haven't. And probably we never will. But I think we'd be making some 
> progress toward solving the problem, maybe making it easier for others in the 
> long run to get it more right than we have.

In practice, it might cover 95% of the use cases.

> <humourous>
> Hmm, how about ditching float, double and real, as well as the imaginary 
> versions? Only going for the complex types - now that'd be a way to solve 
> this problem! ;)
> </humourous>

Or we could stick to ints. Microsoft dropped 80-bit reals; why not 
continue the trend and abolish floating point entirely?



More information about the Digitalmars-d mailing list