Compile time float binary representation
Don
nospam at nospam.com
Sat Aug 1 02:54:20 PDT 2009
Jeremie Pelletier wrote:
> Is there a way to convert a float (or double/real) to an integral number without changing its binary representation at compile time?
>
> I need to extract the sign, exponent and mantissa, yet I can't use bit shifting.
> "Error: 'R' is not of integral type, it is a real" is the error I get.
There's a super-hacky way: pass the real as a template value parameter,
and parse its .mangleof! Not recommended, but it does work.
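Roughly like this (just a sketch; the names Carrier and marker are mine,
and the exact hex layout in the mangled name is a compiler ABI detail):

template Carrier(real x)
{
    // Any symbol declared here gets x baked into its mangled name:
    // the value shows up as hex digits per the name-mangling ABI.
    void marker() {}
}

// Prints the mangled name at compile time; a CTFE string parser can
// then dig the sign/exponent/mantissa hex digits back out of it.
pragma(msg, Carrier!(1.5L).marker.mangleof);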
The other way is to do it with CTFE, subtracting powers of 2 until the
residual is < 1.
Not great either.
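Something along these lines (a minimal sketch with made-up names; it
assumes a finite, normal double and ignores NaN, infinity and denormals):

struct FloatParts
{
    int sign;        // 0 or 1
    int exponent;    // unbiased binary exponent
    ulong mantissa;  // 53-bit significand, implicit leading 1 included
}

FloatParts extractParts(double x)
{
    FloatParts p;
    if (x < 0) { p.sign = 1; x = -x; }
    if (x == 0) return p;

    // Scale x into [1, 2) using only arithmetic; bit operations on the
    // representation and pointer casts are not available in CTFE.
    while (x >= 2) { x /= 2; ++p.exponent; }
    while (x < 1)  { x *= 2; --p.exponent; }

    // Peel off the significand one bit at a time: subtract the current
    // power of two whenever the residual still contains it.
    for (int i = 0; i < 53; ++i)
    {
        p.mantissa <<= 1;
        if (x >= 1) { p.mantissa |= 1; x -= 1; }
        x *= 2;
    }
    return p;
}

// Works at compile time:
enum parts = extractParts(6.5);
static assert(parts.sign == 0 && parts.exponent == 2);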
>
> The usual *cast(uint*)&value won't work either at compile time.
>
> Any suggestions?
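(For reference, the run-time idiom you mention, wrapped up; bitsOf is
just an illustrative name. CTFE rejects the pointer reinterpretation,
so enum bits = bitsOf(1.5f); fails even though the same call is fine
at run time.)

uint bitsOf(float value)
{
    // Fine at run time, but CTFE refuses to reinterpret the bytes of
    // a float through a pointer cast, hence the question above.
    return *cast(uint*)&value;
}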