No more implicit conversion real->complex?!
Don Clugston
dac at nospam.com.au
Tue Mar 21 04:59:43 PST 2006
Rioshin an'Harthen wrote:
> "Don Clugston" <dac at nospam.com.au> wrote in message
> news:dvoklp$2pmq$2 at digitaldaemon.com...
>> Rioshin an'Harthen wrote:
>>> "Don Clugston" <dac at nospam.com.au> wrote in message
>>> news:dvobhe$2cm6$1 at digitaldaemon.com...
>>>
>>> I've been thinking about this ever since the last discussion, and I
>>> believe there might be a better solution to the problem at hand than
>>> disabling real -> creal implicit conversions.
>>>
>>> Since the compiler knows the storage requirements of the different types,
>>> why not, when multiple implicit conversions are possible (e.g. the above
>>> mentioned sin( creal ) and sin( real )), make it choose the one with the
>>> smallest storage requirement (i.e. the one that changes the original
>>> number the least)?
>>>
>>> So if we have
>>>
>>> creal sin( creal c );
>>> real sin( real r );
>>>
>>> writefln( sin( 3.2 ) );
>>>
>>> as above, then since 3.2 is, according to the spec, a double and we don't
>>> find double sin( double d ) anywhere, we try implicit conversions. Now, we
>>> have two options: creal sin( creal c ) and real sin( real r ). The
>>> storage requirement of creal is larger than that of real, so conversion
>>> to real changes the original double less than conversion to creal. Thus,
>>> the compiler chooses to convert it to real.
>>>
>>> Naturally, we can help the compiler make the choice:
>>>
>>> writefln( sin( cast( creal ) 3.2 ) );
>>>
>>> would then pick the creal version (since 3.2 has been cast to it).
>>>
>>> What are your thoughts about this? Could this work? And if this could,
>>> should this be added to the D specifications?
>> This would mean the lookup rules become more complicated. I think Walter
>> was very keen to keep them simple.
>
> I doubt they'd become that much more complicated. Currently, the DMD
> compiler has to look up all the possible implicit conversions, and if
> there's more than one possible conversion, error out because it can't know
> which one is intended.
>
> Now, since it knows the types in question - and the type of the value being
> passed to the function - it's not that much more to do, IMHO. Basically,
> where it would error with an ambiguous implicit conversion, search for the
> smallest of the possible types that is at least as large as the current
> type. If no type matches, emit the error; otherwise select that type. A
> simple search at compile time is all that's required when we encounter more
> than one possible implicit conversion, to select the smallest of the
> possible ones.
(a) your scheme would mean that float->cfloat (64 bits) is preferred
over float->real (80 bits) on x86 CPUs.
(b) since the size of real is not fixed, the result of the function
lookup could depend on what CPU it's being compiled for!
(c) What if the function has more than one argument?
It might be better to just include a tie-break for the special cases of
real types -> complex types, and imaginary -> complex. But I think case
(c) is a serious problem anyway.
Given
   func( real, creal )   // #1
   func( creal, real )   // #2
   func( creal, creal )  // #3
should func( 7.0, 5.0 )
match #1, #2, or #3?
<genuinequestion>
And if "none, it's still ambiguous", have we really solved the problem?
</genuinequestion>
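
For concreteness on (a) and (b), just printing the sizes involved shows the
problem (a quick sketch of my own; the exact figure for real depends on the
compiler and platform, which is exactly point (b)):

import std.stdio;

void main()
{
    // (a) Under a smallest-size rule, float -> cfloat (8 bytes) would beat
    //     float -> real, even though real preserves the value with more
    //     precision than cfloat's float-sized real part.
    writefln("float:  %s bytes", float.sizeof);
    writefln("cfloat: %s bytes", cfloat.sizeof);
    // (b) real.sizeof is not fixed: 10 on DMD/Win32, typically padded to 12
    //     or 16 on other targets, so the chosen overload could vary per CPU.
    writefln("real:   %s bytes", real.sizeof);
}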
>
> Approximately (in some kind of pseudo-code):
>
> type tryImplicitConversion( type intype, type[] implicit_conversion )
> {
>     // illegal_type: fallback if we can't find any other; its size is
>     // defined as the maximum
>     least_type = illegal_type;
>
>     foreach( type in implicit_conversion )
>     {
>         // we're not interested in types that are *less* in size than
>         // our input type
>         if( intype.sizeof > type.sizeof )
>             continue;
>
>         // nor are we interested in larger types than necessary
>         if( least_type.sizeof < type.sizeof )
>             continue;
>
>         // ok, this is the smallest type we can convert to
>         // (that we've found so far)
>         least_type = type;
>     }
>
>     return least_type;
> }
>
>
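
For what it's worth, here's a runnable rendering of that pseudo-code in D
itself (my own sketch, with invented names; candidate types are represented
by their sizes, and 0 stands in for illegal_type):

import std.stdio;

// Pick the smallest candidate size that is at least as large as the
// argument's size; return 0 if no candidate qualifies.
size_t smallestWideningSize( size_t inSize, size_t[] candidates )
{
    size_t best = 0;
    foreach( c; candidates )
    {
        if( c < inSize )
            continue;               // narrower than the argument: skip
        if( best != 0 && best <= c )
            continue;               // we already have one at least this small
        best = c;                   // smallest acceptable candidate so far
    }
    return best;
}

void main()
{
    // double argument, candidates creal and real: picks real.sizeof.
    writefln( "double -> %s bytes",
        smallestWideningSize( double.sizeof, [creal.sizeof, real.sizeof] ) );
    // float argument, candidates cfloat and real: picks cfloat.sizeof,
    // which is objection (a) above.
    writefln( "float  -> %s bytes",
        smallestWideningSize( float.sizeof, [cfloat.sizeof, real.sizeof] ) );
}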