Accuracy of floating point calculations

H. S. Teoh hsteoh at
Wed Oct 30 15:12:29 UTC 2019

On Wed, Oct 30, 2019 at 09:03:49AM +0100, Robert M. Münch via Digitalmars-d-learn wrote:
> On 2019-10-29 17:43:47 +0000, H. S. Teoh said:
> > On Tue, Oct 29, 2019 at 04:54:23PM +0000, ixid via Digitalmars-d-learn wrote:
> > > On Tuesday, 29 October 2019 at 16:11:45 UTC, Daniel Kozak wrote:
> > > > On Tue, Oct 29, 2019 at 5:09 PM Daniel Kozak <kozzi11 at> wrote:
> > > > 
> > > > AFAIK dmd use real for floating point operations instead of
> > > > double
> > > 
> > > Given x87 is deprecated and has been recommended against since
> > > 2003 at the latest it's hard to understand why this could be seen
> > > as a good idea.
> > 
> > Walter talked about this recently as one of the "misses" in D (one
> > of the things he predicted wrongly when he designed it).
> Why would the real type have been a wrong decision? Maybe code
> generation should be optimized to avoid x87 when all terms are double,
> but overall more precision is better for some use cases.

It wasn't a wrong *decision* per se, but a wrong *prediction* of where
the industry would be headed.  Walter was expecting that people would
move towards higher precision, but what with SSE2 and other such trends,
and the general neglect of x87 in hardware developments, it appears that
people have been moving towards 64-bit doubles rather than 80-bit
extended reals.

Though TBH, my opinion is that it's not so much a neglect of higher
precision as a general sentiment in recent years towards
standardization, i.e., to be IEEE-compliant (64-bit floating point)
rather than work with a non-standard format (80-bit x87 reals).  I also
would prefer to have higher precision, but it would be nicer if that
higher precision was a standard format with guaranteed semantics that
isn't dependent upon a single vendor or implementation.

> I'm very happy it exists, and x87 too because our app really needs
> this extended precision. I'm not sure what we would do if we only had
> doubles.
> I'm not aware of any 128 bit real implementations done using SIMD
> instructions which get good speed. Anyone?

Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
doubles), or do you mean actual IEEE 128-bit reals?  'cos the two are
different, semantically.

I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format)
to show up in x86, but I'm not holding my breath.  In the meantime, I've
been looking into arbitrary-precision float libraries like libgmp
instead. It's software-simulated, and therefore slower, but for certain
applications where I want very high precision, it's currently the
only option.


If Java had true garbage collection, most programs would delete themselves upon execution. -- Robert Sewell

More information about the Digitalmars-d-learn mailing list