No more implicit conversion real->complex?!

Rioshin an'Harthen rharth75 at hotmail.com
Tue Mar 21 10:00:07 PST 2006


"Don Clugston" <dac at nospam.com.au> wrote in message 
news:dvp8ie$k5c$1 at digitaldaemon.com...
> Rioshin an'Harthen wrote:
>> "Don Clugston" <dac at nospam.com.au> wrote in message 
>> news:dvotc1$489$1 at digitaldaemon.com...
>>> (a) your scheme would mean that float->cfloat (64 bits) is preferred 
>>> over float->real (80 bits) on x86 CPUs.
>>
>> Hmm... yes, a slight problem in my logic. Still fixable, though. Let's 
>> introduce a concept of "family" into this, with a family consisting of:
>>
>> A: void
>> B: char, wchar, dchar
>> C: bool
>> D: byte, short, int, long, cent
>> E: ubyte, ushort, uint, ulong, ucent
>> F: float, double, real
>> G: ifloat, idouble, ireal
>> H: cfloat, cdouble, creal
>>
>> etc.
>>
>> Now, allow implicit conversion upwards in a family, and between families 
>> only if impossible to convert inside the family.
>
> A name I was using instead of "family" was "archetype". Ie, 
> archetype!(char) = dchar, archetype!(ifloat) = ireal.
>
> That would mean that a uint would prefer to be converted to a ulong than 
> to an int. That might cause problems. Maybe.

Well, I don't see it as a problem. I'm firmly in the camp that believes 
conversions between signed and unsigned types should always require an 
explicit cast.
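
A minimal sketch of the hazard I mean (the variable names are mine):

```d
// Why I'd want an explicit cast between signed and unsigned:
// today a negative int silently becomes a huge uint.
import std.stdio;

void main()
{
    int  i = -1;
    uint u = i;            // implicit today; I'd require cast(uint) i
    writefln("%s", u);     // prints 4294967295 - rarely what was meant
}
```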


>> This would fix this problem, but it would introduce a different level of 
>> complexity into it. It might be worth it, or then again it might not. 
>> It's for Walter to decide at a later point. I'd like to have implicit 
>> conversion between real and complex numbers - there's a ton of occasions 
>> I've used it, so I'm trying to voice some thoughts into the matter on how 
>> to preserve those.
>
> How have you been using implicit conversion? Are you talking about in 
> functions, or in expressions?
>
> real r;
> creal c;
>
> c += r;
> c = 2.0;
>
> I think this could be OK. That is, assignment of a real to a creal could 
> still be possible, without an implicit conversion.
> After all, there are no complex literals, so
> creal c = 2 + 3i;
> should be the same as
> c = 2;
> c += 3i;


I've been using implicit casts to complex numbers in many situations: most 
of the time in expressions, but quite often in function calls as well. 
Thus, I've been thinking of ways to keep the implicit casts working.
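
For instance, code along these lines (the function f is just an 
illustration of mine):

```d
// The kind of code I'd like to keep compiling without casts:
creal f(creal z) { return z * z; }

void main()
{
    real r = 1.5;
    creal c = f(r);       // implicit real -> creal at the call site
    creal d = r + 2i;     // and the same inside expressions
}
```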

>>> (b) since the size of real is not fixed, the result of the function 
>>> lookup could depend on what CPU it's being compiled for!
>>
>> True, real is not fixed in size. But according to the D specifications it 
>> is the "largest hardware implemented floating point size", and I take it 
>> to mean it can't be less in size than double. If a real and a double is 
>> the same size, there's no problem, and even less of one if real is 
>> larger.
>
> Yes, the only issue is that, for example, real is bigger than cfloat on 
> x86, but the same size on PowerPC. And on a machine with 128-bit reals, a 
> real could be the same size as a cdouble.

I think this problem would go away if we took the "family" or archetype of 
a type into account in the cast, since we would prefer to cast to any 
larger type having the same archetype (i.e. being in the same family). 
Only if that is not possible would we cast to a type outside the family.
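
As a toy sketch of that preference order (all the names here are mine, not 
proposed syntax):

```d
// "Prefer in-family widening first": an in-family conversion wins
// whenever the target's components are at least as wide; a
// cross-family one (e.g. real -> creal) is only a fallback. The
// size comparison adapts per CPU: on x86 an 80-bit real converts
// to creal (80-bit components) rather than cfloat (32-bit
// components), while on PowerPC real == double and nothing breaks.
enum Family { Real, Imaginary, Complex }

struct Type
{
    Family family;
    int bits;   // component size in bits; varies per CPU for 'real'
}

bool inFamilyWidening(Type f, Type t)
{
    return f.family == t.family && t.bits >= f.bits;
}

bool crossFamilyFallback(Type f, Type t)
{
    return f.family != t.family && t.bits >= f.bits;
}
```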


>>> (c) What if the function has more than one argument?
>>>
>>> It might be better to just include a tie-break for the special cases of
>>> real types -> complex types, and imaginary -> complex. But I think case 
>>> (c) is a serious problem anyway.
>>>
>>> given
>>> func( real, creal ) // #1
>>> func( creal, real ) // #2
>>> func( creal, creal) // #3
>>>
>>> should func(7.0, 5.0)
>>> match #1, #2, or #3?
>>
>> Well, this is a problem, there's no doubt about it. As I take the 
>> example, the intention is that at least one of the arguments of the 
>> function has to be complex, and #1 and #2 are more like optimized 
>> versions than #3. This is still ambiguous.
>>
>> If we'd go by symmetry, having it match #3, then the question could be 
>> posed as:
>>
>> given
>> func( real, creal ) // #1
>> func( creal, real ) // #2
>>
>> should func( 7.0, 5.0 )
>> match #1 or #2.
>>
>> Still, I would go for the symmetrical - if any one parameter is 
>> implicitly converted, first try a version where as many as possible of 
>> the parameters are implicitly converted, unless a cast( ) has been used 
>> to explicitly mark a type. So I say (in this case) match #3 - I may be 
>> utterly wrong, but it's the feeling I have.
>
> So one possibility would be to change the lookup rules to be:
> * an exact match
> * OR an unambiguous match with implicit conversions, not including 
> real->creal, ireal->creal (and possibly not including inter-family 
> conversions)
> * OR an unambiguous match with implicit conversions, *including* 
> real->creal, ireal->creal (possibly including other inter-family 
> conversions, like char->short).
> * OR it does not match.
>
> which is a little more complicated than the existing D rules, but not by 
> much.

This sounds like what I was thinking.
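
To check that I read the tiers the same way, a toy model (treating 
ambiguity within a tier as an immediate error is my reading, not something 
stated above):

```d
// Walk the tiers in order and accept the first tier with exactly
// one matching overload:
//   tier 0: exact matches
//   tier 1: implicit conversions, excluding real/ireal -> creal
//   tier 2: all implicit conversions, including real/ireal -> creal
const int NO_MATCH  = -1;
const int AMBIGUOUS = -2;

// matchesPerTier[t] = how many overloads match at tier t.
int resolve(int[] matchesPerTier)
{
    for (int t = 0; t < matchesPerTier.length; t++)
    {
        int n = matchesPerTier[t];
        if (n == 1) return t;          // unambiguous: stop here
        if (n > 1)  return AMBIGUOUS;  // my reading: ambiguity at a
                                       // tier is an error, with no
                                       // fall-through to later tiers
    }
    return NO_MATCH;
}
```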

>> No, we haven't. And probably we never will. But I think we'd be making 
>> some progress into solving the problem, maybe making it easier for others 
>> in the long run to be able to get it more right than we have.
>
> In practice, it might cover 95% of the use cases.

True, and I think 95% is good enough for most. In the remaining 5%, where 
the implicit cast is not good enough (the compiler replies with an error 
message), it's simply time to use an explicit cast.
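
Using the overloads from your earlier example, that fallback would look 
like:

```d
void func(real  a, creal b) {} // #1
void func(creal a, real  b) {} // #2
void func(creal a, creal b) {} // #3

void main()
{
    // If func(7.0, 5.0) is rejected as ambiguous, explicit casts
    // settle it; here both casts make #3 an exact match:
    func(cast(creal) 7.0, cast(creal) 5.0);
}
```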

>> <humourous>
>> Hmm, how about ditching float, double and real, as well as the imaginary 
>> versions? Only going for the complex types - now that'd be a way to solve 
>> this problem! ;)
>> </humourous>
>
> Or we could stick to ints. Microsoft dropped 80-bit reals, why not 
> continue the trend and abolish floating point entirely.

I would hope we'd be smarter than Microsoft... :) 





More information about the Digitalmars-d mailing list