approxEqual() has fooled me for a long time...

Don nospam at nospam.com
Wed Oct 20 14:28:07 PDT 2010


Fawzi Mohamed wrote:
> 
> On 20-ott-10, at 20:53, Don wrote:
> 
>> Andrei Alexandrescu wrote:
>>> On 10/20/10 10:52 CDT, Don wrote:
>>>> I don't think it's possible to have a sensible default for absolute
>>>> tolerance, because you never know what scale is important. You can do a
>>>> default for relative tolerance, because floating point numbers work that
>>>> way (eg, you can say they're equal if they differ in only the last 4
>>>> bits, or if half of the mantissa bits are equal).
>>>>
>>>> I would even think that the acceptable relative error is almost always
>>>> known at compile time, but the absolute error may not be.
>>> I wonder if it could work to set either number, if zero, to the 
>>> smallest normalized value. Then proceed with the feqrel algorithm. 
>>> Would that work?
>>> Andrei
>>
>> feqrel actually treats zero fairly. There are exactly as many possible 
>> values almost equal to zero, as there are near any other number.
>> So in terms of the floating point number representation, the behaviour 
>> is perfect.
>>
>> Thinking out loud here...
>>
>> I think that you use absolute error to deal with the difference 
>> between the computer's representation, and the real world. You're 
>> almost pretending that they are fixed point numbers.
>> Pretty much any real-world data set has a characteristic magnitude, 
>> and anything which is more than (say) 10^^50 times smaller than the 
>> average is probably equivalent to zero.
> 
> The thing is twofold. On one hand, yes, numbers 10^^50 times smaller are 
> not important; but the real problem is elsewhere: you will probably add 
> and subtract numbers of magnitude x, and on those operations the *absolute* 
> error is x*epsilon.
> 
> Note that the error is relative to the magnitude of the operands, not of 
> the result, so with respect to the result it is really an absolute error.

At that point, you have just lost precision.
BTW -- I haven't yet worked out whether we are disagreeing with each 
other or not.

> Now the end result might have a relative error, but also an absolute 
> error whose size depends on the magnitude of the operands.
> If the result is close to 0, the absolute error is likely to dominate, 
> and a relative-error check will fail.

I don't understand what you're saying here. If you encounter 
catastrophic cancellation, you really have no idea what the correct 
answer is.
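
To illustrate (a deliberately artificial example; the values are chosen 
only for illustration):

import std.math : feqrel;
import std.stdio : writefln;

void main()
{
    // Operands of magnitude ~1; each carries a rounding error of about
    // 1 * double.epsilon, which is an *absolute* error of ~2e-16.
    double a = 1.0 + 3e-16;   // actually stored as the nearest double
    double b = 1.0;

    double computed = a - b;  // cancellation: the absolute error dominates
    double intended = 3e-16;  // the value we were hoping to get

    writefln("computed = %a", computed);
    writefln("intended = %a", intended);
    writefln("bits in common: %s", feqrel(computed, intended)); // only a couple
}

The result is tiny, and almost none of its bits are correct -- so there is 
no meaningful "correct answer" to compare against.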

> This is the case, for example, for matrix multiplication.
> In NArray I wanted to check the linear algebra routines with matrices of 
> random numbers, and feqrel failed too often for numbers close to 0.

You mean, total loss of precision is acceptable for numbers close to zero?

What it is telling you is correct: your routines have poor numerical 
behaviour near zero. feqrel is failing when you have an ill-conditioned 
matrix.
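
For what it's worth, feqrel really is purely relative -- zero and its 
neighbourhood get no special treatment, so a small residual is judged on 
exactly the same terms as any other value (the numbers below are arbitrary):

import std.math : feqrel;
import std.stdio : writeln;

void main()
{
    // The same relative difference (1e-10) at two very different scales:
    writeln(feqrel(1.0, 1.0 + 1e-10));             // ~33 bits agree
    writeln(feqrel(1.0e-300, 1.0000000001e-300));  // about the same count

    // Two tiny residuals left over after cancellation: relatively they
    // disagree, and feqrel says so, even though both are "near zero"
    // on the scale of the original data.
    writeln(feqrel(2.2e-16, 3.0e-16));             // only a bit or two
}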

> Obviously the right thing, as Walter said, is to let the user choose the 
> magnitude of their results.

This is also what I said, in the post you're replying to: it depends on 
the data.

> In the code I posted I simply chose 0.5**(mantissa_bits/4), which is 
> smaller than 1 but not horribly so.

> One can easily make that an input parameter (it is the shift parameter 
> in my code).
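
If I've read that description correctly, it amounts to something like the 
following (my own paraphrase, not your actual NArray code; `close` and 
`shift` are just the names I'm using here):

import std.math;

// Treat |a - b| below 0.5 ^^ shift (by default 0.5 ^^ (mant_dig / 4),
// about 1.2e-4 for double) as "both effectively zero"; otherwise fall
// back to a relative check via feqrel.
bool close(T)(T a, T b, int shift = T.mant_dig / 4)
{
    immutable T absTol = cast(T)(0.5 ^^ shift);
    if (fabs(a - b) <= absTol)
        return true;
    // otherwise require agreement in at least half of the mantissa bits
    return feqrel(a, b) >= T.mant_dig / 2;
}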

I don't like the idea of having an absolute error by default. Although 
it is sometimes appropriate, I don't think it should be viewed as 
something which should always be included. It can be highly misleading.

I guess that was the point of the original post in this thread.
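
It's easy to reproduce: with approxEqual's current defaults (maxRelDiff = 
1e-2 and maxAbsDiff = 1e-5, if I remember them correctly), any two 
sufficiently small numbers compare "equal", no matter how different they 
really are:

import std.math : approxEqual;
import std.stdio : writeln;

void main()
{
    writeln(approxEqual(1e-10, 5e-7));  // true -- they differ by a factor of 5000
    writeln(approxEqual(1e-10, 0.0));   // true -- the absolute window swallows it
    writeln(approxEqual(1.0, 1.005));   // true -- a genuine relative match (0.5%)
}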

