Always false float comparisons

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Tue May 17 23:57:58 PDT 2016


On 5/17/2016 11:15 PM, Ethan Watson wrote:
> With Manu's example, that would have been a good old-fashioned matrix multiply
> to transform a polygon vertex from local space to screen space, with whatever
> other values were required for the render effect. The problem there was that
> the hardware itself only calculated 24 bits of precision while dealing with
> 32-bit values. Such a solution was not an option.

I don't understand the 24 vs 32 bit value thing. There is no floating point 
type with a 32-bit mantissa. Floats have 24-bit mantissas, doubles 53.
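For reference, D exposes those mantissa widths directly as type properties; a 
quick check (real.mant_dig is 64 on x86, where real is the 80-bit extended 
type):

    import std.stdio;

    void main()
    {
        writefln("float:  %s mantissa bits", float.mant_dig);  // 24
        writefln("double: %s mantissa bits", double.mant_dig); // 53
        writefln("real:   %s mantissa bits", real.mant_dig);   // 64 on x86
    }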

> Gaming hardware has gotten a lot less cheap and nasty. But Manu brought it up
> because it is conceptually the same problem as 32/64-bit run-time values vs
> 80-bit compile-time values. Every solution offered here either comes down to
> "rewrite your code" or "increase code complexity", neither of which is often an
> option (changing the code in Manu's example required a seven-plus-hour compile
> for each iteration of the code; and being a very hot piece of code, it needed
> to be as simple as possible to maintain speed). Unlike the hardware, game
> programming has not gotten less cheap or nasty. We will cheat our way to the
> fastest-performing code using whatever trick we can find that doesn't cause the
> whole thing to come crashing down. The standard way of creating float values at
> compile time is to calculate them manually at the correct precision and put a
> #define in with that value. Being unable to specify/override compile-time
> precision means that the solution is to declare enums in the exact same manner,
> which might result in more maintenance work if someone later decides to switch
> from float to double etc. for their value.

I do not understand why the compile-time version cannot use roundToFloat() in 
places where it matters. And if the hardware was using a different precision 
than float/double, which appears to have been the case, the code would have to 
be written to account for that anyway.
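To make that concrete, here is a minimal sketch of how the suggestion might 
look. roundToFloat() is the proposed intrinsic, not an existing library 
function, so the one-line stand-in and the constants below are purely 
illustrative:

    // Stand-in for the proposed intrinsic: force a round trip through
    // IEEE single precision.
    float roundToFloat(real x) { return cast(float) x; }

    // Round wherever the run-time code would store a float, so the
    // constants are folded at float precision rather than being carried
    // at the compiler's higher (e.g. 80-bit real) precision.
    enum float oneThird = roundToFloat(1.0L / 3.0L);
    enum float scaled   = roundToFloat(oneThird * 3.0L);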

In any case, the problem Manu was having was with C++. The precision of 
calculations is implementation-defined in C++, and it varies all over the place 
depending on compiler brand, compiler version, compiler switches, and exactly 
how the code is laid out. There can also be differences between the FP hardware 
on the compiler host machine and on the target machine.

My proposal would make the behavior more consistent than C++, not less.

Lastly, it is hard to make suggestions on how to deal with the problem without 
seeing the actual offending code. There may very well be something else going 
on, or some simple adjustment that can be made.

One way that *will* make the results exactly the same as on the target hardware 
is to actually run the code on the target hardware, save the results to a file, 
and incorporate that file in the build.
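A hedged sketch of that workflow in D, where the file name, table name, and the 
256-entry sine table are all just illustrative: a small program runs on the 
target hardware and prints its results as D source, and the host build then 
compiles that file in unchanged.

    import std.math : PI, sin;
    import std.stdio;

    // Run this on the target hardware, redirect the output to sin_table.d,
    // then compile sin_table.d into the host build like any other module.
    void main()
    {
        writeln("module sin_table;");
        writeln("immutable float[] sinTable = [");
        foreach (i; 0 .. 256)
        {
            // %a prints an exact hexadecimal float literal, so the value
            // survives the trip through text unchanged.
            writefln("    %af,", cast(float) sin(2.0f * cast(float) PI * i / 256));
        }
        writeln("];");
    }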
