Question/request/bug(?) re. floating-point in dmd

Iain Buclaw ibuclaw at ubuntu.com
Wed Nov 6 07:07:26 PST 2013


On 6 November 2013 09:09, Don <x at nospam.com> wrote:

> On Wednesday, 6 November 2013 at 06:28:59 UTC, Walter Bright wrote:
>
>> On 11/5/2013 8:19 AM, Don wrote:
>>
>>> On Wednesday, 30 October 2013 at 18:28:14 UTC, Walter Bright wrote:
>>>
>>>> Not exactly what I meant - I mean the algorithm should be designed
>>>> so that extra precision does not break it.
>>>>
>>>
>>> Unfortunately, that's considerably more difficult than writing an
>>> algorithm for a known precision. And it is impossible in any case
>>> where you need full machine precision (which applies to practically
>>> all library code, and most of my work).
>>>
>>
>> I have a hard time buying this. For example, when I wrote matrix
>> inversion code, more precision always gave more accurate results.
>>
>
> With matrix inversion you're normally far from full machine precision. If
> half the bits are correct, you're doing very well.
>
> The situations I'm referring to, are the ones where the result is
> correctly rounded, when no extra precision is present. If you then go and
> add extra precision to some or all of the intermediate results, the results
> will no longer be correctly rounded.
>
> eg, the simplest case is rounding to integer:
> 3.499999999999999999999999999
> must round to 3. If you round it in two steps - first to a shorter
> precision (giving 3.5), then to integer - you'll get 4.
>
> But we can test this. I predict that adding some extra bits to the
> internal calculations in CTFE (to make it have eg 128 bit intermediate
> values instead of 80), will cause Phobos math unit tests to break.
> Perhaps this can already be done trivially in GCC.
>
>
>
The only tests that break in GDC because GCC operates on 160-bit
intermediate values are the 80-bit-specific tests (the unittest in
std.math with the comment "Note that these are only valid for 80-bit
reals").

Saying that though, GCC isn't exactly IEEE 754 compliant either...





>>> A compiler intrinsic, which generates no code (simply inserting a
>>> barrier for the optimiser) sounds like the correct approach.
>>>
>>> Coming up for a name for this operation is difficult.
>>>
>>
>> float toFloatPrecision(real arg) ?
>>
>
> Meh. That's wordy and looks like a rounding operation. I'm interested in
> the operation float -> float and double -> double (and perhaps
> real -> real), where no conversion is happening, and on most
> architectures it will be a no-op.
>
> It should be a name that indicates that it's not generating any code,
> you're just forbidding the compiler from doing funky weird stuff.
>
> And for generic code, the name should be the same for float, double, and
> real.
>
> Perhaps an attribute rather than a function call.
>
> double x;
> double y = x.strictfloat;
> double y = x.strictprecision;
>
> ie, (expr).strictfloat  would return expr, discarding any extra precision.
> That's the best I've come up with so far.
>

double y = cast(float) x;  ?  :o)


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


More information about the Digitalmars-d mailing list