Floating point differences at compile-time
bearophile
bearophileHUGS at lycos.com
Thu Dec 31 10:30:51 PST 2009
I don't understand where the result differences come from in this code:
import std.math: sqrt, PI;
import std.stdio: writefln;

void main() {
    const double x1 = 3.0 * PI / 16.0;
    writefln("%.17f", sqrt(1.0 / x1));

    double x2 = 3.0 * PI / 16.0;
    writefln("%.17f", sqrt(1.0 / x2));

    real x3 = 3.0 * PI / 16.0;
    writefln("%.17f", sqrt(1.0 / x3));

    real x4 = 3.0L * PI / 16.0L;
    writefln("%.17f", sqrt(1.0L / x4));
}
Output with various D compilers:
DMD1:
1.30294003174111994
1.30294003174111972
1.30294003174111979
1.30294003174111979
DMD2:
1.30294003174111972
1.30294003174111972
1.30294003174111979
1.30294003174111979
LDC:
1.30294003174111994
1.30294003174111994
1.30294003174111972
1.30294003174111972
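A quick check (sketched in Python rather than D, since the comparison is about the printed decimal strings and is language-neutral) suggests the two double-precision results above are adjacent IEEE doubles, one ULP apart, while the third value lies between them and is only representable at the wider real (80-bit) precision:

```python
import math

# Two of the distinct values printed above, parsed back to IEEE doubles.
# 17+ significant digits round-trip exactly, so these are the original bits.
a = float("1.30294003174111972")
c = float("1.30294003174111994")

print(math.ulp(a))                        # gap from a to the next double
print(math.nextafter(a, math.inf) == c)   # a and c are adjacent doubles
print(float("1.30294003174111979") == a)  # the third printed value is not a
                                          # distinct double; it rounds onto a
```

So the compilers are not producing arbitrary noise: they land on neighboring representable values depending on whether intermediates are rounded to double or kept at real precision.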
I'd like the compiler(s) to give more deterministic results here.
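For reference, the mathematically exact sqrt(16 / (3 * PI)) can be computed well beyond double precision with Python's decimal module, to see which of the printed results is closest to the true value (the 30-digit PI literal below is hardcoded, not derived):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30                           # 30 significant digits
pi = Decimal("3.14159265358979323846264338328")  # PI to 30 digits (hardcoded)

x = Decimal(3) * pi / Decimal(16)
r = (Decimal(1) / x).sqrt()
print(r)   # reference value, ~1.3029400317411..., for comparison above
```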
Bye,
bearophile
More information about the Digitalmars-d-learn mailing list