sqrt(2) must go

Manu turkeyman at gmail.com
Fri Oct 21 00:53:10 PDT 2011


On 21 October 2011 09:00, Don <nospam at nospam.com> wrote:

> On 21.10.2011 05:24, Robert Jacques wrote:
>
>> On Thu, 20 Oct 2011 09:11:27 -0400, Don <nospam at nospam.com> wrote:
>> [snip]
>>
>>> I'd like to get to the situation where those overloads can be added
>>> without breaking peoples code. The draconian possibility is to disallow
>>> them in all cases: integer types never match floating point function
>>> parameters.
>>> The second possibility is to introduce a tie-breaker rule: when there's
>>> an ambiguity, choose double.
>>> And a third possibility is to only apply that tie-breaker rule to
>>> literals.
>>> And the fourth possibility is to keep the language as it is now, and
>>> allow code to break when overloads get added.
>>>
>>> The one I really, really don't want, is the situation we have now:
>>> #5: whenever an overload gets added, introduce a hack for that
>>> function...
>>>
>>
>> I agree that #5 and #4 are not acceptable longer-term solutions. I do
>> CUDA/GPU programming, so I live in a world of floats and ints. So
>> changing the rules does worry me, but mainly because most people don't
>> use floats on a daily basis, which introduces bias into the discussion.
>>
>
> Yeah, that's a valuable perspective.
> sqrt(2) is "I don't care what the precision is".
> What I get from you and Manu is:
> if you're working in a float world, you want float to be the tiebreaker.
> Otherwise, you want double (or possibly real!) to be the tiebreaker.
>
> And therefore, the
>
>
>> Thinking it over, here are my suggestions, though I'm not sure if 2a or
>> 2b would be best:
>>
>> 1) Integer literals and expressions should use range propagation to
>> select the thinnest lossless conversion. If no lossless conversion
>> exists, then an error is raised. Choosing double as a default is always
>> the wrong choice for GPUs and most embedded systems.
>> 2a) Lossy variable conversions are disallowed.
>> 2b) Lossy variable conversions undergo bounds checking when asserts are
>> turned on.
>>
>
> The spec says: "Integer values cannot be implicitly converted to another
> type that cannot represent the integer bit pattern after integral
> promotion."
> Now although that was intended to only apply to integers, it reads as if
> it should apply to floating point as well.
>
>
>  The idea behind 2b) would be:
>>
>> int i = 1;
>> float f = i; // assert(true): 1 is exactly representable in float
>> i = int.max;
>> f = i; // assert(false): int.max loses precision when converted to float
>>
>
> That would be catastrophically slow.
>
> I wonder how painful disallowing lossy conversions would be.
>

1: Seems reasonable for literals; "Integer literals and expressions
should use range propagation to select the thinnest lossless
conversion"... but can you clarify what you mean by 'expressions'? I
assume we're talking strictly literal expressions?
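
As I read it, rule 1 would behave something like this (a hypothetical
sketch of the proposed semantics, not what the compiler does today):

float a = 16777216;  // ok: 2^24 is exactly representable in float
float b = 16777217;  // error under the proposal: 2^24+1 doesn't fit in
                     // float's 24-bit mantissa, so no lossless
                     // conversion to float exists
double c = 16777217; // ok: fits losslessly in double's 53-bit mantissa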

2b: Does runtime bounds checking actually address the question, namely
which of an ambiguous set of overloads to choose?
If I read you correctly, 2b suggests bounds checking the implicit cast
for data loss at runtime, but which overload do we pick?
float/double/real? We'd still be arguing that question even with this
proposal taken into consideration... :/
Perhaps I missed something?
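
To illustrate my point (a contrived sketch; f is a made-up function):

void f(float x) {}
void f(double x) {}

int i = 2;
f(i); // still ambiguous: the compiler must pick an overload before any
      // runtime bounds check can run, so 2b doesn't settle the choice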

Naturally all this complexity assumes we go with the tie-breaker approach,
which I'm becoming more and more convinced is a bad plan...
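
For reference, the breakage scenario this thread started from looks
something like this (a sketch with simplified signatures):

// library v1: only a double overload exists
double sqrt(double x);
auto y = sqrt(2); // compiles: int implicitly converts to double

// library v2 adds more overloads:
float sqrt(float x);
real sqrt(real x);
auto z = sqrt(2); // error: 2 now matches float, double and real equally well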