bigfloat II
Georg Wrede
georg.wrede at iki.fi
Tue Apr 21 00:12:30 PDT 2009
Don wrote:
> Paul D. Anderson wrote:
>> Joel C. Salomon Wrote:
>>
>>> Paul D. Anderson wrote:
>>>> Multiplying two floats produces a number whose potential precision
>>>> is the sum of the operands' precision. We need a method to determine
>>>> what the precision of the product should be. Not that it's difficult
>>>> to come up with an answer -- but we have to agree on something.
>>> Not more precision than the input data deserve. The decimal
>>> floating-point numbers of the new IEEE 754-2008 carry with them a notion
>>> of "how precise is this result?"; this might be a good starting
>>> point for discussion.
>>>
>>> -- Joel Salomon
>>
>> The implementation will comply with IEEE 754-2008. I just wanted to
>> illustrate that precision can depend on the operation as well as the
>> operands.
>>
>> Paul
>
> I'm not sure why you think there needs to be a precision depending on
> the operation.
> The IEEE 754-2008 has the notion of "Widento" precision, but AFAIK it's
> primarily to support x87 -- it's pretty clear that the normal mode of
> operation is to use the precision which is the maximum precision of the
> operands. Even when there is a Widento mode, it's only provided for the
> non-storage formats (Ie, 80-bit reals only).
>
> I think it should operate the way int and long does:
> typeof(x*y) is x if x.mant_dig >= y.mant_dig, else y.
> What you might perhaps do is have a global setting for adjusting the
> default size of a newly constructed variable. But it would only affect
> constructors, not temporaries inside expressions.
Well, if you're doing precise arithmetic, you need a different number of
significant digits at different parts of a calculation.
Say you have an integer of n digits (which obviously needs n digits of
precision in the first place). Square it, and all of a sudden you need
2n digits to represent it exactly. Unless you truncate it to n digits --
but then taking the square root would yield something other than the
original.
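A quick sketch of this in Python's decimal module (used here as a convenient stand-in for any variable-precision decimal type; the number 1234567 is just an arbitrary 7-digit example):

```python
from decimal import Decimal, localcontext

x = Decimal("1234567")          # a 7-digit integer

# Squaring a 7-digit number can need up to 14 digits to stay exact.
with localcontext() as ctx:
    ctx.prec = 14
    exact = x * x               # 1524155677489, all digits kept

# With only 7 working digits, the square is rounded...
with localcontext() as ctx:
    ctx.prec = 7
    rounded = x * x             # 1.524156E+12

# ...and the rounded square no longer has the original as its exact root.
with localcontext() as ctx:
    ctx.prec = 14
    back = rounded.sqrt()

print(exact)                    # 1524155677489
print(rounded != exact)         # True: information was lost in rounding
print(back != x)                # True: the round trip misses the original
```

The point is just that the precision "suitable" for the multiply (2n digits) is not the precision suitable for storing the inputs (n digits).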
To improve the speed, one could envision having the calculations always
use a suitable precision.
This problem of course /doesn't/ show up when doing integer arithmetic,
even at "unlimited" precision, or with BCD, which usually is fixed
point. But to do variable-precision floating point (mostly in software,
because we're wider than the hardware), the precision really needs to
vary. That's also where "the precision deserved by the input" comes
from: for instance, 12300000000000 has only three digits of precision.
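Python's decimal module (which follows the General Decimal Arithmetic rules behind IEEE 754-2008) happens to record exactly this distinction; the helper name sig_digits below is made up for the illustration:

```python
from decimal import Decimal

# The number of stored coefficient digits reflects how the value was
# written, which is one way to track the precision the input "deserves".
a = Decimal("1.23E+13")         # three significant digits
b = Decimal("12300000000000")   # written out with 14 digits

def sig_digits(d: Decimal) -> int:
    """Count the coefficient digits a Decimal carries."""
    return len(d.as_tuple().digits)

assert a == b                   # numerically equal...
assert sig_digits(a) == 3       # ...but a carries 3 digits
assert sig_digits(b) == 14      # while b carries 14
```

So whether 12300000000000 "deserves" three digits or fourteen depends on notation, which is information a library can preserve.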
Combining all these things in a math library is an interesting task. But
done right, it could mean faster calculations (especially with very big
or small numbers), and even more accurate results (by avoiding
truncation of decimal parts in the wrong places).
---
Yesterday I watched a clip on YouTube about the accretion disk around a
black hole. The caption said the black hole was 16km in diameter.
I could cry. The guys who originally made the simulation meant that it's
less than a hundred miles, but more than one, and that's when a
scientist says ten. Then the idiots who translate this to sane units
look up the conversion factor, and print the result.
16 has two digits of precision, where the original only had one.
(Actually even less, but let's not get carried away...) Even using 15km
would have used only 1 1/2 digits of precision, which would have been
preferable. Lucky they didn't write the caption as 16.09km or 16km 90
meters. That would really have been conjuring up precision out of thin air.
The right caption would have said 10km. Only that would have retained
the [non-]precision, and also given the right idea to the reader.
(Currently the reader is left thinking, "Oh, I guess it then would look
somehow different had it been 12 or 20 km instead, since the size was
given so accurately.")
---
Interestingly, the Americans do the inverse of this. They say 18 months
ago when they only mean "oh, more than a year, and IIRC, less than two
years". Or, 48 hours ago, when they mean the day before yesterday in
general.
I guess they're 99.9999999999998% wrong.