Always false float comparisons

Ethan Watson via Digitalmars-d digitalmars-d at puremagic.com
Tue May 17 23:15:08 PDT 2016


On Wednesday, 18 May 2016 at 05:40:57 UTC, Walter Bright wrote:
> That wasn't my prescription. My prescription was either 
> changing the algorithm so it was not sensitive to exact 
> bits-in-last-place, or to use roundToFloat() and 
> roundToDouble() functions.

With Manu's example, that would have been a good old-fashioned 
matrix multiply to transform a polygon vertex from local space to 
screen space, with whatever other values were required for the 
render effect. The problem there was that the hardware itself 
only calculated 24 bits of precision while dealing with 32-bit 
values, so such a solution was not an option.

Gaming hardware has gotten a lot less cheap and nasty. But Manu 
brought it up because it is conceptually the same problem as 
32/64-bit run-time values vs 80-bit compile-time values.
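A rough sketch of that mismatch (not Manu's code; the function and loop here are made up, and whether the comparison actually fails depends on the intermediate precision a given compiler uses when folding the compile-time side):

import std.stdio;

// The same accumulation, once folded by the compiler (CTFE) and once run normally.
float sum()
{
    float f = 0;
    foreach (i; 0 .. 10)
        f += 0.1f;      // CTFE may accumulate this at 80-bit real precision
    return f;
}

enum float ct = sum();  // evaluated at compile time

void main()
{
    float rt = sum();   // evaluated at run time in 32-bit floats
    // If the two sides were accumulated at different intermediate precisions,
    // they can differ in the last bits, making this test unexpectedly false.
    writeln(ct == rt);
}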
Every solution offered here comes down to either "rewrite your 
code" or "increase code complexity", and neither is often an 
option (changing the code in Manu's example meant a seven-plus-hour 
compile time for each iteration; and being a very hot piece of 
code, it needed to stay as simple as possible to maintain speed). 
Unlike the hardware, game programming has not gotten any less 
cheap or nasty. We will cheat our way to the fastest-performing 
code using whatever trick we can find that doesn't cause the 
whole thing to come crashing down.

The standard way of creating float values at compile time is to 
calculate them manually at the correct precision and put them in 
a #define. Being unable to specify or override compile-time 
precision means the solution in D is to declare enums in exactly 
the same manner, which can mean more maintenance work if someone 
later decides to switch a value from float to double etc.
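In D, that workaround ends up looking something like the sketch below (the constant names and values are just illustrative):

// Constants pre-computed by hand and written out at the precision actually
// wanted, instead of letting the compiler fold the expression at its own
// (possibly higher) precision.
enum float  invSqrt2_f = 0.70710678f;          // hand-rounded to float precision
enum double invSqrt2_d = 0.7071067811865476;   // a separate literal is needed if the code moves to double

float scale(float x)
{
    return x * invSqrt2_f;
}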

