floats default to NaN... why?
F i L
witte2008 at gmail.com
Sat Apr 14 09:38:20 PDT 2012
On Saturday, 14 April 2012 at 15:44:46 UTC, Jerome BENOIT wrote:
>
>
> On 14/04/12 16:47, F i L wrote:
>> Jerome BENOIT wrote:
>>> Why would a compiler set `real' to 0.0 rather than 1.0, Pi,
>>> .... ?
>>
>> Because 0.0 is the "lowest" (smallest, starting point, etc..)
>
> quid -infinity ?
The concept of zero is more meaningful than -infinity. Zero is
the logical starting place because zero represents nothing
(mathematically), which is in line with how pointers behave
(only applicable to memory, not scale).
>> numerical value. Pi is the corner case and obviously has to
>> be explicitly set.
>>
>> If you want to take this further, chars could even be
>> initialized to spaces or newlines or something similar.
>> Pointers/references need to be defaulted to null because they
>> absolutely must equal an explicit value before use. Value
>> types don't share this limitation.
>>
>
> The CHAR set is bounded, `real' is not.
Good point; I'm not so convinced char should default to " ".
There are arguments either way, but I haven't given it much
thought.
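
For reference, here's a small sketch of what D actually picks
as defaults today (assuming a current dmd; isNaN is the one
from std.math):

import std.math : isNaN;

void main()
{
    float f;   // float.init is NaN
    int   i;   // int.init is 0
    char  c;   // char.init is 0xFF (an invalid UTF-8 code unit)
    int*  p;   // pointers default to null

    assert(isNaN(f));
    assert(i == 0);
    assert(c == 0xFF);
    assert(p is null);
}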
>>> The most convenient default certainly depends on the
>>> underlying mathematics, and a compiler cannot (yet)
>>> understand the encoded mathematics.
>>> NaN is certainly the very best choice: whatever the
>>> mathematics involved, they will blow up sooner or later.
>>> And, from a practical point of view, blowing up is easy to
>>> trace.
>>
>> Zero is just as easy for the runtime/compiler to default to;
>
> The Fortran age is over.
> The D compiler contains a lot of features that are not easy
> for the compiler to set up BUT are meant to ease coding.
>
>
> and bugs can be introduced anywhere in the code, not just at
> definition,
>
> so the NaN approach discards one source of error.
Sure, the question then becomes "does catching bugs introduced
by inaccurately defining a variable outweigh the price of
inconsistency and the learning curve?" My opinion is no:
expected behavior is more important, especially since I'm not
sure I've ever heard of someone in C# having bugs that would
have been helped by defaulting to NaN. I mean really, how does:
float x; // NaN
...
x = incorrectValue;
...
foo(x); // first time x is used
differ from:
float x = incorrectValue;
...
foo(x);
in any meaningful way? Except that in this one case:
float x; // NaN
...
foo(x); // uses x, resulting in NaNs
...
x = foo(x); // sets x after the first time x is used
you'll get a "more meaningful" error message, which, assuming
you didn't just write a ton of FP code, you'll be able to trace
to its source faster.
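
To put that in concrete terms, here's a tiny self-contained
sketch (the names are made up) of the NaN case I'm describing:

import std.stdio;
import std.math : isNaN;

void main()
{
    float x;       // D default: NaN
    float sum = 0;

    // ... the assignment to x was forgotten somewhere here ...

    sum += x;      // the NaN quietly propagates into sum

    writeln(sum);  // prints "nan" -- the symptom shows up at
                   // the use site, possibly far from the
                   // missing assignment
    assert(isNaN(sum));
}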
It just isn't enough to justify defaulting to NaN, IMO. I even
think the process of hunting down bugs is more straightforward
when defaulting to zero, because every numerical bug is pursued
the same way, regardless of type. You don't have to remember
that FP specifically causes this issue in only some cases.
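
And here's a rough sketch of the inconsistency I mean (the
Particle struct is purely made up for illustration):

import std.stdio;

struct Particle
{
    int   hits;    // member defaults to 0
    float weight;  // member defaults to NaN
}

void main()
{
    Particle p;    // weight was never set anywhere
    p.hits += 1;

    auto score = p.hits * p.weight;
    writeln(score);  // "nan": the int member acted like a
                     // harmless 0, while the float member
                     // poisoned the result -- two different
                     // failure modes for the same mistake
}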