Should this be correct behaviour?

Don Clugston dac at nospam.com.au
Fri Nov 30 04:15:47 PST 2007


Janice Caron wrote:
> On 11/29/07, Walter Bright <newshound1 at digitalmars.com> wrote:
>>>     float[] f = new float[1];
>>>     float[] g = f.dup;
>>>     assert(f == g); /* Passes */
>>>     assert(f[0] == g[0]); /* Fails */
>>>
>>> My question is, shouldn't the first assert also fail?
>>>
>>> Put another way, how can two arrays be considered equal, if their
>>> elements are not considered equal?
>> You are comparing two array *references* which point to the same array.
>> The two references clearly are the same.
> 
> I don't see why. I duped the array.
> 
>> (f == g) does not compare the contents of the arrays.
> 
> I don't understand that. It must do. Isn't that how == works for arrays?
> 
>>> I realise that everything is behaving according to spec. But is it sensible?
>> Yes:
>>
>> float a;
>> float b = a;
>> assert(b == a); // fails
>>
>> And this is how floating point works.
> 
> I meant, is it sensible that (f == g), given that (f[0] == g[0]) is false?

Whenever that situation happens, we also have (f == f) passing even when
(f[0] == f[0]) is false. It's really quite unfortunate that the normal
floating-point == is defined to be false for NaNs, rather than being a simple
bitwise comparison. It wrecks generic code, and it eliminates huge classes of
optimisation opportunities. IMHO, a different operator should have been
invented; but that's the fault of IEEE 754, not D. At least we can control the
damage by restricting the unhelpful behaviour to the built-in FP types. A
function could be provided for the case where you really want the comparison
to fail if there are any NaNs in either of the arrays. But "containsNaN(arr[])"
is probably more useful in that situation anyway.
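
For concreteness, here's a minimal sketch of that kind of helper. The name
containsNaN and its signature are just the suggestion above, not anything that
exists in Phobos; it relies only on the fact that NaN is the one value that
compares unequal to itself:

// Hypothetical helper: true if any element of the slice is a NaN.
bool containsNaN(in float[] arr)
{
    foreach (x; arr)
    {
        // NaN is the only floating-point value for which (x != x) holds.
        if (x != x)
            return true;
    }
    return false;
}

void main()
{
    float[] f = new float[1];   // float elements default-initialise to NaN in D
    float[] g = f.dup;

    assert(f[0] != g[0]);       // element-wise, NaN != NaN per IEEE 754
    assert(containsNaN(f) && containsNaN(g));
    // Under the compiler discussed in this thread, (f == g) nonetheless passed,
    // which is the behaviour being questioned.
}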


