approxEqual() has fooled me for a long time...

Fawzi Mohamed fawzi at gmx.ch
Wed Oct 20 09:05:34 PDT 2010


On 20 Oct 2010, at 17:52, Don wrote:

> Andrei Alexandrescu wrote:
>> On 10/20/10 5:32 CDT, Lars T. Kyllingstad wrote:
>>> (This message was originally meant for the Phobos mailing list, but
>>> for some reason I am currently unable to send messages to it*.
>>> Anyway, it's probably worth making others aware of this as well.)
>>>
>>> In my code, and in unittests in particular, I use
>>> std.math.approxEqual() a lot to check the results of various
>>> computations.  If I expect my result to be correct to within ten
>>> significant digits, say, I'd write
>>>
>>>   assert (approxEqual(result, expected, 1e-10));
>>>
>>> Since results often span several orders of magnitude, I usually
>>> don't care about the absolute error, so I just leave it unspecified.
>>> So far, so good, right?
>>>
>>> NO!
>>>
>>> I just discovered today that the default value of approxEqual's
>>> absolute tolerance is 1e-5, and not zero as one would expect.  This
>>> means that the following, quite unexpectedly, succeeds:
>>>
>>>   assert (approxEqual(1e-10, 1e-20, 0.1));
>>>
>>> This seems completely illogical to me, and I think it should be
>>> fixed ASAP.  Any objections?
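
A minimal sketch of why that assert passes, assuming the approxEqual
signature of the time, approxEqual(lhs, rhs, maxRelDiff = 1e-2,
maxAbsDiff = 1e-5):

    import std.math : approxEqual;

    void main()
    {
        // |1e-10 - 1e-20| is about 1e-10, well below the default
        // maxAbsDiff of 1e-5, so the absolute-tolerance test accepts
        // the pair even though, relatively, they differ by ten orders
        // of magnitude.
        assert (approxEqual(1e-10, 1e-20, 0.1));

        // Passing maxAbsDiff = 0.0 explicitly leaves only the relative
        // test, which rejects the pair.
        assert (!approxEqual(1e-10, 1e-20, 0.1, 0.0));
    }
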
>> I wonder what would be a sensible default.  If the default for
>> absolute error is zero, then you'd have an equally odd behavior for
>> very small numbers (and most notably zero).  Essentially nothing
>> would be approximately zero.
>> Andrei
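
To make Andrei's point concrete: with a zero absolute tolerance, a
comparison against an exact zero can succeed only through the absolute
test, so it always fails.  A sketch, under the same assumed signature as
above:

    import std.math : approxEqual;

    void main()
    {
        // No relative tolerance can accept a nonzero value next to an
        // exact 0.0, so with maxAbsDiff = 0 this must fail.
        assert (!approxEqual(1e-300, 0.0, 1e-2, 0.0));
    }
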
>
> I don't think it's possible to have a sensible default for absolute
> tolerance, because you never know what scale is important.  You can
> have a default for relative tolerance, because floating point numbers
> work that way (e.g., you can say they're equal if they differ in only
> the last 4 bits, or if half of the mantissa bits are equal).
>
> I would even think that the acceptable relative error is almost
> always known at compile time, but the absolute error may not be.
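
Don's bit-count criterion maps directly onto std.math.feqrel, which
returns the number of mantissa bits on which two values agree.  A sketch
(closeBits is an illustrative helper, not a Phobos function):

    import std.math : feqrel;

    /// Equal except possibly in the last `slack` mantissa bits.
    bool closeBits(double a, double b, int slack = 4)
    {
        return feqrel(a, b) >= double.mant_dig - slack;
    }

    void main()
    {
        assert (closeBits(1.0, 1.0 + 2 * double.epsilon));
        assert (!closeBits(1.0, 1.001));
    }
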

I have had success using the following (very empirical) variant:

/// feqrel variant that is more forgiving close to 0.
/// If you sum values, you cannot expect better than T.epsilon absolute
/// error.  feqrel compares relative error, and close to 0 (where the
/// density of floats is high) it is much more stringent.
/// To guarantee T.epsilon absolute error one should use shift=1.0; here
/// we are more stringent and use T.mant_dig/4 digits more close to 0.
/// (feqrel and min are standard library functions; isComplexType and
/// ctfe_powI are blip helpers.)
int feqrel2(T)(T x, T y) {
    static if (isComplexType!(T)) {
        // compare real and imaginary parts separately; report the worst
        return min(feqrel2(x.re, y.re), feqrel2(x.im, y.im));
    } else {
        // compile-time 0.5 ^^ (T.mant_dig / 4)
        const T shift = ctfe_powI(0.5, T.mant_dig / 4);
        // shift both arguments away from 0 before the relative comparison
        if (x < 0) {
            return feqrel(x - shift, y - shift);
        } else {
            return feqrel(x + shift, y + shift);
        }
    }
}

(from blip.narray.NArrayBasicOps)
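
For comparison, a self-contained sketch of the same idea using only the
standard library (real-valued case only; feqrelShifted and its shift
constant are illustrative stand-ins, not blip's actual code):

    import std.math : feqrel;

    int feqrelShifted(T)(T x, T y)
    {
        // 0.5 ^^ (T.mant_dig / 4), written without a pow helper
        enum T shift = cast(T) 1.0 / (1L << (T.mant_dig / 4));
        // shift both values away from 0 so that feqrel's relative
        // comparison is not overly strict in the dense region near 0
        return (x < 0) ? feqrel(x - shift, y - shift)
                       : feqrel(x + shift, y + shift);
    }

    void main()
    {
        // plain feqrel: 1e-30 and 0.0 share no bits of precision
        assert (feqrel(1e-30, 0.0) == 0);
        // the shifted variant sees them as equal to working precision
        assert (feqrelShifted!double(1e-30, 0.0) > double.mant_dig / 2);
    }
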

