floats default to NaN... why?
Jonathan M Davis
jmdavisProg at gmx.com
Fri Apr 13 21:40:09 PDT 2012
On Saturday, April 14, 2012 06:00:35 F i L wrote:
> From the FAQ:
> > NaNs have the interesting property in that whenever a NaN is
> > used as an operand in a computation, the result is a NaN.
> > Therefore, NaNs will propagate and appear in the output
> > whenever a computation made use of one. This implies that a NaN
> > appearing in the output is an unambiguous indication of the use
> > of an uninitialized variable.
> >
> > If 0.0 was used as the default initializer for floating point
> > values, its effect could easily be unnoticed in the output, and
> > so if the default initializer was unintended, the bug may go
> > unrecognized.
>
> So basically, it's for debugging? Is that its only reason? If so,
> I'm at a loss as to why the default is NaN. The priority should
> always be ease-of-use, IMO. Especially when it breaks a "standard":
>
> struct Foo {
>     int x, y;    // ready for use.
>     float z, w;  // messes things up.
>     float r = 0; // almost always...
> }
>
> I'm putting this in .Learn because I'm not really suggesting a
> change so much as trying to learn the reasoning behind it. As far
> as I can see, the "debugging" benefit doesn't outweigh the break in
> consistency; I'm not convinced there is any benefit at all. Having
> the core numerical types always and unanimously default to zero is
> understandable and consistent (and what I'm used to from C#). The
> above could be written as:
>
> struct Foo {
>     float z = float.nan, ...
> }
>
> if you wanted to guarantee that the values are set explicitly at
> construction. Which seems like a job better suited for unittests
> to me anyway.
>
> musing...
Types default to the closest thing that they have to an invalid value, so that
code blows up as soon as possible if you fail to initialize a variable to a
proper value, and so that it fails deterministically (unlike uninitialized
variables that hold garbage values, where the failure varies from run to run).
NaN is the invalid value for floating point types and works fantastically at
indicating that you screwed up and failed to initialize or assign your
variable a proper value. null for pointers and references works similarly
well.
If anything, the integral types and bool fail, because they don't _have_
invalid values. The closest that they have is 0 and false respectively, so
that's what they get. It's the integral types that are inconsistent, not the
floating point types.
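Concretely, the default initializers can be checked with a small sketch (the
struct and field names here are just for illustration; the .init values
themselves are defined by the language):

```d
import std.math : isNaN;
import std.stdio : writeln;

struct Foo
{
    int x;   // defaults to 0 (the closest thing to invalid an int has)
    bool b;  // defaults to false
    float z; // defaults to float.nan, the genuinely invalid value
    int* p;  // defaults to null
}

void main()
{
    Foo f;
    assert(f.x == 0);
    assert(f.b == false);
    assert(f.z.isNaN);  // note: NaN != NaN, so test with isNaN, not ==
    assert(f.p is null);
    writeln(float.init); // prints nan
}
```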
It was never really intended that variables would be default initialized with
values that you would use. You're supposed to initialize them or assign them
to appropriate values before using them. Now, since the default values are
well-known and well-defined, you can rely on them if you actually _want_ those
values, but the whole purpose of default initialization is to make code fail
deterministically when variables aren't properly initialized - and to fail as
quickly as possible.
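To illustrate that "fail deterministically" point, here is a small sketch (the
average function is a made-up example): forgetting to initialize sum produces
NaN in the output every single run, rather than a garbage result that might
occasionally look plausible.

```d
import std.stdio : writeln;

float average(float[] samples)
{
    float sum;      // oops: defaults to float.nan, not 0
    foreach (s; samples)
        sum += s;   // NaN propagates through every operation
    return sum / samples.length;
}

void main()
{
    writeln(average([1.0f, 2.0f, 3.0f])); // prints nan: the bug is obvious
}
```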
- Jonathan M Davis