Accuracy of floating point calculations

H. S. Teoh hsteoh at quickfur.ath.cx
Thu Oct 31 16:07:07 UTC 2019


On Thu, Oct 31, 2019 at 09:52:08AM +0100, Robert M. Münch via Digitalmars-d-learn wrote:
> On 2019-10-30 15:12:29 +0000, H. S. Teoh said:
[...]
> > Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
> > doubles), or do you mean actual IEEE 128-bit reals?
> 
> Simulated, because HW support is lacking on x86. And PPC is not that
> mainstream. I expect Apple to move to ARM, but I have never heard about
> 128-bit support for ARM.

Maybe you might be interested in this:

	https://stackoverflow.com/questions/6769881/emulate-double-using-2-floats

It's mostly about simulating 64-bit floats where the hardware only
supports 32-bit floats, but the same principles apply to simulating
128-bit floats on 64-bit hardware.
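For the curious, here is a rough sketch in D of the basic idea
("double-double" arithmetic built on error-free transformations).
It's only an illustration of the technique, not production code, and
keep in mind that compilers which evaluate intermediates at higher
precision (e.g. 80-bit x87) can break these tricks unless you force
strict double rounding:

import std.stdio;

// Unevaluated sum hi + lo of two doubles, giving roughly twice the
// precision of a plain double ("double-double").
struct DD
{
    double hi;
    double lo;
}

// Knuth's two-sum: the exact sum of a and b expressed as s + e.
DD twoSum(double a, double b)
{
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);
    return DD(s, e);
}

// Simplified double-double addition; a real library would
// renormalize the low parts more carefully.
DD add(DD a, DD b)
{
    DD s = twoSum(a.hi, b.hi);
    double lo = s.lo + a.lo + b.lo;
    return twoSum(s.hi, lo);
}

void main()
{
    DD one  = DD(1.0, 0.0);
    DD tiny = DD(1e-30, 0.0);
    DD sum  = add(one, tiny);
    // The 1e-30 contribution would vanish in a plain double addition,
    // but survives in the low part here.
    writefln("hi = %.17g, lo = %.17g", sum.hi, sum.lo);
}

The QD library (Hida/Li/Bailey) implements this style of
double-double and quad-double arithmetic with all the operations and
corner cases handled, if you want to see the full treatment.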


[...]
> > In the meantime, I've been looking into arbitrary-precision float
> > libraries like libgmp instead. It's software-simulated, and
> > therefore slower, but for certain applications where I want very
> > high precision, it's currently the only option.
> 
> Yes, but it's way too slow for our product.

Fair point.  In my case I'm mainly working with batch-oriented
processing, so a slight slowdown is an acceptable tradeoff for higher
accuracy.


> Maybe one day we need to deliver an FPGA-based co-processor PCI card
> that can run 128-bit calculations... but that will be a pretty hard
> way to go.
[...]

Maybe switch to PPC? :-D


T

-- 
If you want to solve a problem, you need to address its root cause, not just its symptoms. Otherwise it's like treating cancer with Tylenol...

