Loss of precision errors in FP conversions

dsimcha dsimcha at yahoo.com
Tue Apr 19 17:02:31 PDT 2011


On 4/19/2011 7:49 PM, bearophile wrote:
> In Bugzilla I have just added an enhancement request that asks for a little change in D, I don't know if it was already discussed or if it's already present in Bugzilla:
> http://d.puremagic.com/issues/show_bug.cgi?id=5864
>
> In a program like this:
>
> void main() {
>      uint x = 10_000;
>      ubyte b = x;
> }
>
>
> DMD 2.052 raises a compilation error like this, because the b=x assignment may lose some information, some bits of x:
>
> test.d(3): Error: cannot implicitly convert expression (x) of type uint to ubyte
>
> I think that a safe and good system language has to help avoid unwanted (implicit) loss of information during data conversions.
>
> This is a case of loss of precision where D generates no compile errors:
>
>
> import std.stdio;
> void main() {
>      real f1 = 1.0000111222222222333;
>      writefln("%.19f", f1);
>      double f2 = f1; // loss of FP precision
>      writefln("%.19f", f2);
>      float f3 = f2; // loss of FP precision
>      writefln("%.19f", f3);
> }
>
> No compile error is raised, yet some information is lost, as the output shows:
> 1.0000111222222222332
> 1.0000111222222223261
> 1.0000110864639282226
>
> So one possible way to address this situation is to statically disallow implicit double=>float, real=>float, and real=>double conversions (on some computers real=>double conversions don't cause loss of information, but I suggest ignoring this, to increase code portability), and introduce compile-time errors like:
>
> test.d(5): Error: cannot implicitly convert expression (f1) of type real to double
> test.d(7): Error: cannot implicitly convert expression (f2) of type double to float
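>
> Under such a rule the narrowing conversions would still be available, just
> explicit. A minimal sketch of how the example above would have to be
> written (assuming a plain cast is the accepted explicit form):
>
> import std.stdio;
> void main() {
>      real f1 = 1.0000111222222222333;
>      double f2 = cast(double)f1; // explicit narrowing, no error
>      float f3 = cast(float)f2;   // explicit narrowing, no error
>      writefln("%.19f", f3);
> }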
>
>
> Today float values may seem less useful: with scalar CPU instructions the
> performance difference between operations on float and double is often
> unimportant, and often you want the precision of double. But modern CPUs
> (and current GPUs) also have vector instructions, which currently perform
> operations on 4 float values or 2 double values (or 8 floats or 4 doubles)
> at a time. Such vector instructions are sometimes used directly in GCC C
> code through SSE intrinsics, or come out of GCC's auto-vectorization of
> loops over normal scalar C code. Here the usage of float instead of double
> gives almost a twofold performance increase, and in some programs (like
> certain ray-tracing code) the precision of a float is enough. So a
> compile-time error that catches currently implicit double->float
> conversions may help the programmer avoid unwanted usages of double,
> letting the compiler pack 4/8 floats into a vector register during loop
> vectorization.
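>
> As an illustration (my own example, not code from any particular program),
> this is the kind of loop auto-vectorization applies to; with float
> elements an SSE register processes 4 lanes per instruction, with double
> only 2:
>
> void scale(float[] a, float k) {
>      foreach (i; 0 .. a.length)
>          a[i] *= k; // with floats, 4 elements per SSE register
> }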
>
>
> A partially related note: std.math currently doesn't seem to use the C cosf and sinf functions, though it does use sqrtf:
>
> import std.math: sqrt, sin, cos;
> void main() {
>      float x = 1.0f;
>      static assert(is(typeof(  sqrt(x)  ) == float)); // OK
>      static assert(is(typeof(  sin(x)   ) == float)); // ERR
>      static assert(is(typeof(  cos(x)   ) == float)); // ERR
> }
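>
> Float overloads in std.math forwarding to the C functions would fix this;
> a minimal sketch (the C declarations are already in core.stdc.math):
>
> import core.stdc.math: sinf, cosf;
> float sin(float x) { return sinf(x); }
> float cos(float x) { return cosf(x); }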
>
> Bye,
> bearophile

Please, _NOOOOOOO!!_  The integer conversion errors are already arguably 
too pedantic; they make generic code harder to write and get in the way 
about as often as they help.  Floating point tends to degrade much more 
gracefully than integer arithmetic does.  Where integer narrowing can be 
silently, non-obviously and completely wrong, floating point narrowing 
will at least be approximately right, or become infinity and be wrong in 
an obvious way.  I know what you suggest could prevent bugs in a lot of 
cases, but it also has the potential to get in the way in a lot of cases.
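
To make the contrast concrete, a quick example of my own:

import std.stdio;
void main() {
    uint x = 10_000;
    ubyte b = cast(ubyte)x;      // silently, completely wrong: 16
    double d = 10_000.12345;
    float f = cast(float)d;      // approximately right: ~10000.123
    writeln(b, " ", f);
}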

Generally I worry about D's type system becoming like the Boy Who Cried 
Wolf, flagging so many potential errors (as opposed to things that are 
definitely errors) that people become conditioned to just put in 
whatever casts they need to shut it up.  I definitely fell into that 
trap when porting to 64-bit some 32-bit code that was sloppy about 
size_t vs. int.  I knew there was no way it was going to be a problem, 
because none of my arrays were going to come within a few orders of 
magnitude of int.max, but the compiler insisted on nagging me about it 
and I reflexively just put in casts everywhere.  A warning _may_ be 
appropriate, but definitely not an error.
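
(The kind of thing I mean, for the record:

    int n = cast(int)arr.length;  // length is size_t; "I know it fits"

sprinkled all over, written purely to silence the compiler.)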

