[Issue 360] Compile-time floating-point calculations are sometimes inconsistent

d-bugmail at puremagic.com
Fri Sep 22 00:25:59 PDT 2006


http://d.puremagic.com/issues/show_bug.cgi?id=360





------- Comment #5 from clugdbug at yahoo.com.au  2006-09-22 02:25 -------
(In reply to comment #4)
> while (j <= (1.0f/STEP_SIZE)) is at double precision,
> writefln((j += 1.0f) <= (1.0f/STEP_SIZE)) is at real precision.

I don't understand where the double precision comes from. Since all the values
are floats, the only precisions that make sense are float and real.
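
For reference, here is a minimal sketch of the kind of loop under discussion
(the STEP_SIZE value and the surrounding code are assumptions, not the exact
test case from the report). Whether the two comparisons are evaluated at float
or real precision changes how many times the body runs:

import std.stdio;

const float STEP_SIZE = 0.2f;  // assumed value, for illustration only

void main()
{
    float j = 0.0f;
    while (j <= (1.0f / STEP_SIZE))  // precision of this comparison is the question
    {
        // same expression again; if it is folded at a different precision,
        // the two comparisons can disagree on the last iteration
        writefln((j += 1.0f) <= (1.0f / STEP_SIZE));
    }
}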

Really, 0.2f should not be the same number as 0.2. When you put the 'f' suffix
on, surely you're asking the compiler to truncate the value to float precision;
it can be widened to real precision later without problems. Currently, there's
no way to get a low-precision constant at compile time.

(In fact, you should be able to write real a = 0.2 - 0.2f; to get the
truncation error).
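
A minimal sketch of that check (the format string is only for display); under
the behaviour argued for here, the result is the float rounding error of 0.2
rather than zero:

import std.stdio;

void main()
{
    // exact 0.2 minus its 32-bit approximation
    real a = 0.2 - 0.2f;
    // prints a small non-zero value under the proposed semantics,
    // and 0 if the compiler treats 0.2f as identical to 0.2
    writefln("%.20g", a);
}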

Here's how I think it should work:

const float A = 0.2;  // infinitely accurate 0.2, but type inference on A
                      // should return a float
const float B = 0.2f; // a 32-bit approximation to 0.2
const real C = 0.2;   // infinitely accurate 0.2
const real D = 0.2f;  // a 32-bit approximation to 0.2, but type inference
                      // will give an 80-bit quantity
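
To make the intended difference observable, a hypothetical check (this sketches
what the proposed rules would imply, not what the compiler currently does):

import std.stdio;

const real C = 0.2;   // real approximation of the exact 0.2
const real D = 0.2f;  // 32-bit approximation of 0.2, widened to real

void main()
{
    // false under the proposal: D carries the float rounding error, C does not;
    // with the current behaviour described in this report, the two are equal
    writefln(C == D);
    writefln(C - D);   // the rounding error itself, non-zero under the proposal
}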


-- 



