Always false float comparisons

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Fri May 13 11:16:29 PDT 2016


On 5/12/2016 10:12 PM, Manu via Digitalmars-d wrote:
> No. Do not.
> I've worked on systems where the compiler and the runtime don't share
> floating point precisions before, and it was a nightmare.
> One anecdote, the PS2 had a vector coprocessor; it ran reduced (24bit
> iirc?) float precision, code compiled for it used 32bits in the
> compiler... to make it worse, the CPU also ran 32bits. The result was,
> literals/constants, or float data fed from the CPU didn't match data
> calculated by the vector unit at runtime (ie, runtime computation of
> the same calculation that may have occurred at compile time to produce
> some constant didn't match). The result was severe cracking and
> visible/shimmering seams between triangles as sub-pixel alignment
> broke down.
> We struggled with this for years. It was practically impossible to
> solve, and mostly involved workarounds.

I understand there are some cases where this is needed; I've proposed intrinsics 
for that.


> I really just want D to use double throughout, like all the CPUs that
> run code today. This 80-bit real thing (only on x86 CPUs, though!) is a
> never-ending pain.

It's 128 bits on other CPUs.


> This sounds like designing specifically for my problem from above,
> where the frontend is always different from the backend/runtime.
> Please have the frontend behave such that it operates on the precise
> datatype expressed by the type... the backend probably does this too,
> and the runtime certainly does; they all match.

Except this never happens anyway.

