float equality

Jonathan M Davis jmdavisProg at gmx.com
Sun Feb 20 05:37:58 PST 2011


On Sunday 20 February 2011 05:21:12 spir wrote:
> On 02/20/2011 06:17 AM, Jonathan M Davis wrote:
> > On Saturday 19 February 2011 20:46:50 Walter Bright wrote:
> >> bearophile wrote:
> >>> Walter:
> >>>> That'll just trade one set of problems for another.
> >>> 
> >>> But the second set of problems may be smaller :-)
> >> 
> >> There's a total lack of evidence for that. Furthermore,
> >> 
> >> 1. Roundoff error is not of a fixed magnitude.
> >> 
> >> 2. A user may choose to ignore roundoff errors, but that is not the
> >> prerogative of the language.
> >> 
> >> 3. What is an acceptable amount of roundoff error is not decidable by
> >> the language.
> >> 
> >> 4. At Boeing doing design work, I've seen what happens when engineers
> >> ignore roundoff errors. It ain't pretty. It ain't safe. It ain't
> >> correct.
> > 
> > Honestly, the more that I learn about and deal with floating point
> > numbers, the more I wonder why we don't just use fixed point. Obviously
> > that can be a bit limiting for the size of the number (on either side of
> > the decimal) - particularly in 32-bit land - but with 64-bit numbers, it
> > sounds increasingly reasonable given all of the issues with floating
> > point values. Ideally, I suppose, you'd have both, but the CPU
> > specifically supports floating point (I don't know about fixed point),
> > and I don't think that I've ever used a language which really had fixed
> > point values (unless you count integers as fixed point with no digits to
> > the right of the decimal).
> 
> I don't see how fixed point would solve common issues with floats. Would
> you expand a bit on this?
> 
> For me, the source of the issue is inaccurate and unintuitive
> "translations" from/to decimal and binary. For instance (using Python
> just for example):
>
> >>> 0.1
> 0.10000000000000001
> >>> 0.7
> 0.69999999999999996
> >>> 0.3
> 0.29999999999999999
> 
> To solve this, one may use rationals (representing 0.1 as 1/10) or decimals
> (representing decimal digits, each e.g. by half a byte). Both are costly,
> indeed. I may be overlooking some other points that fixed-point arithmetic solves.
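A short sketch of the translation problem quoted above, together with the two workarounds mentioned (rationals and decimals), using Python's standard library:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floats cannot represent 0.1, 0.2, or 0.3 exactly, so the
# tiny representation errors make an "obvious" equality fail:
print(0.1 + 0.2 == 0.3)  # False

# Decimals store base-10 digits directly, so decimal literals are exact:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Rationals represent 0.1 as the exact fraction 1/10:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Both alternatives trade speed for exactness, which is the cost spir alludes to.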

It may be that you would still end up with situations where two values that you 
would expect to be equal aren't, due to rounding error or the like. However, with a 
fixed-point value, you wouldn't have the problem where a particular value cannot be 
represented even though it's within the type's range and precision. As I understand 
it, there are a number of values which cannot be represented exactly in a floating-
point number and therefore end up being rounded up or down simply because of how 
floating point works, not because the precision isn't high enough.
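A minimal fixed-point sketch of this idea (the scale of two decimal digits, i.e. storing hundredths as plain integers, is an assumption for illustration): every value with at most two decimal places within range is represented exactly, so the 0.1-style surprises disappear.

```python
# Minimal fixed-point sketch: store hundredths as integers (scale = 100).
SCALE = 100

def to_fixed(s: str) -> int:
    """Parse a decimal string like '0.10' into an integer count of hundredths."""
    whole, _, frac = s.partition(".")
    frac = (frac + "00")[:2]  # pad/truncate the fractional part to 2 digits
    sign = -1 if whole.startswith("-") else 1
    return int(whole) * SCALE + sign * int(frac)

# Exact, unlike 0.1 + 0.2 with binary floats:
assert to_fixed("0.10") + to_fixed("0.20") == to_fixed("0.30")
```

Integer addition is exact, so the only rounding happens up front, when a value with more fractional digits than the scale allows is truncated.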

It's definitely true, however, that using fractions would be much more accurate 
for a lot of stuff. That wouldn't be particularly efficient, though. Still, if you're 
doing a lot of math that needs to be accurate, that may be the way to go.
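A sketch of both halves of that trade-off, using Python's `fractions.Fraction`: the results are exact, but the denominators grow as exact intermediate results accumulate, which is why rational arithmetic costs more than fixed-size floats.

```python
from fractions import Fraction

# Exact harmonic sum 1 + 1/2 + ... + 1/20.
total = Fraction(0)
for n in range(1, 21):
    total += Fraction(1, n)

print(total)              # an exact rational result
print(total.denominator)  # a large denominator: the price of exactness
```

Each addition must compute a common denominator and reduce the result, so the per-operation cost grows with the size of the numbers involved, unlike the constant-time arithmetic of hardware floats.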

- Jonathan M Davis


More information about the Digitalmars-d mailing list