Differences in results when using the same function in CTFE and Runtime
Timon Gehr
timon.gehr at gmx.ch
Thu Aug 15 16:56:23 UTC 2024
On 8/15/24 18:50, Abdulhaq wrote:
> On Thursday, 15 August 2024 at 16:21:35 UTC, Abdulhaq wrote:
>> On Thursday, 15 August 2024 at 09:13:31 UTC, Carsten Schlote wrote:
>
> To clarify a bit more, I'm not just talking about single isolated
> computations, I'm talking about e.g. matrix multiplication. Different
> compilers, even LDC vs DMD for example, could optimise the calculation
> in different ways (loop unrolling, step elimination, etc.). Even if the
> rounding algorithms at the chip level are the same, the way the code is
> compiled and the calculations are sequenced will change the error in
> the final answer.
> ...
LDC disables -ffast-math by default.
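For concreteness, here is a minimal sketch of why reassociating
optimizations are opt-in: IEEE 754 addition is not associative, so any
rewrite that reorders a compound sum can change the final bits.

import std.stdio;

void main()
{
    double a = 1.0e16, b = -1.0e16, c = 1.0;
    writeln((a + b) + c); // prints 1
    writeln(a + (b + c)); // prints 0: c is absorbed by b at this magnitude
}

Any transformation that silently turns one of these orderings into the
other (unrolled or vectorized reductions, for instance) changes the
answer; that is exactly what -ffast-math permits and the default mode
does not.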
> Then, variations in pipelining and caching at the processor level could
> also affect the answer.
> ...
No. Pipelining and caching affect only timing; they do not change the
bit-exact results that IEEE 754 prescribes for each operation.
> And if you move on to different computing paradigms such as quantum
> computing and other as yet undiscovered techniques, again the way
> operations, rounding, etc. are compounded will cause divergences in
> computations.
> ...
Yes, if you move on to an analog computing paradigm with imperfect error
correction, full reproducibility will go out the window. Floating point
is not that, though.
> Now, we could insist that we somehow legislate for the way compound
> calculations are conducted. But that would cripple the speed of
> calculations for some processor architectures/paradigms, for a goal
> (reproducibility) which is worthy but, for 99% of usages, not
> sufficiently beneficial to justify the big price in performance.
>
>
It's really not that expensive: changing the result via optimizations is
disabled by default in LDC. And actually, how do you know that the
compiler does not pessimize your hand-optimized compound operations?
I am not even against people being able to pass -ffast-math; it should
just not destroy the correctness and reproducibility of everyone else's
computations.
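To tie this back to the subject line, a minimal sketch (the function
name is illustrative) of the kind of check under discussion: evaluate
the same function via CTFE and at run time and compare. If the compiler
applies strict IEEE double semantics in both contexts, the results match
bit for bit; the divergences reported in this thread arise when one side
is evaluated differently, e.g. at higher intermediate precision or after
reassociation.

import std.stdio;

double sumOfSquares(const double[] xs)
{
    double s = 0.0;
    foreach (x; xs)
        s += x * x; // fixed evaluation order, one rounding per step
    return s;
}

void main()
{
    enum ctfe = sumOfSquares([0.1, 0.2, 0.3]); // forced compile-time evaluation
    auto rt   = sumOfSquares([0.1, 0.2, 0.3]); // ordinary run-time call
    writeln(ctfe == rt); // true iff CTFE and run time round identically
}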