Why don't we switch to C-like floating point arithmetic instead of automatic expansion to reals?

Johannes Pfau via Digitalmars-d digitalmars-d at puremagic.com
Sat Aug 6 05:09:45 PDT 2016


Am Sat, 6 Aug 2016 02:29:50 -0700
schrieb Walter Bright <newshound2 at digitalmars.com>:

> On 8/6/2016 1:21 AM, Ilya Yaroshenko wrote:
> > On Friday, 5 August 2016 at 20:53:42 UTC, Walter Bright wrote:
> >  
> >> I agree that the typical summation algorithm suffers from double
> >> rounding. But that's one algorithm. I would appreciate it if you
> >> would review
> >> http://dlang.org/phobos/std_algorithm_iteration.html#sum to ensure
> >> it doesn't have this problem, and if it does, how we can fix it.
> >
> > Phobos's sum is really two different algorithms: pairwise summation
> > for Random Access Ranges and Kahan summation for Input Ranges.
> > Pairwise summation does not require IEEE rounding, but Kahan
> > summation does.
> >
> > The problem with a real-world example is that it depends on
> > optimisation. For example, if all temporary values are rounded, this
> > is not a problem, and if no temporary values are rounded this is not
> > a problem either. However, if some of them are rounded and others
> > are not, this will break the Kahan algorithm.
> >
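For illustration, a minimal sketch of the Kahan scheme under discussion
(not Phobos's actual implementation): the compensation term `c` recovers
the low-order bits lost in `sum + y`, which only works if all the
temporaries are rounded at the same precision.

-----------------------------
// Compensated (Kahan) summation, sketch only.
// If an optimizer keeps some of these temporaries at extended precision
// and rounds others to double, `(t - sum) - y` no longer recovers the
// rounding error and the compensation silently degrades.
double kahanSum(const(double)[] data)
{
    double sum = 0, c = 0;
    foreach (x; data)
    {
        immutable y = x - c;    // apply the running correction
        immutable t = sum + y;  // low-order bits of y may be lost here
        c = (t - sum) - y;      // recover exactly what was lost
        sum = t;
    }
    return sum;
}
-----------------------------
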
> > Kahan is the shortest and one of the slowest (compared with KBN,
> > for example) summation algorithms. The truth about Kahan is that we
> > may keep it in Phobos, but we can use pairwise summation for Input
> > Ranges without random access, and it will be faster than Kahan. So
> > we don't need Kahan for the current API at all.
> >
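A minimal pairwise-summation sketch, for comparison; the point is that
each addition combines partial sums of similar magnitude, so the result
does not depend on whether intermediate values are kept at extended
precision.

-----------------------------
// Pairwise (divide-and-conquer) summation, sketch only.
double pairwiseSum(const(double)[] data)
{
    if (data.length <= 8)       // small base case: plain left-to-right sum
    {
        double s = 0;
        foreach (x; data)
            s += x;
        return s;
    }
    immutable mid = data.length / 2;
    return pairwiseSum(data[0 .. mid]) + pairwiseSum(data[mid .. $]);
}
-----------------------------
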
> > Mir has both Kahan, which works with 32-bit DMD, and pairwise,
> > which works with input ranges.
> >
> > The Kahan, KBN, KB2, and Precise summations always use `real` or
> > `Complex!real` internal values for the 32-bit x86 target. The only
> > problem with Precise summation is that if we need a precise result
> > in double and use real for the internal summation, then the last
> > bit will be wrong in 50% of cases.
> >
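A small double-rounding sketch of that last-bit problem; the constants
below are picked for illustration (they are not from the post) so that
rounding the sum to 80-bit real first and then to double differs from
rounding it to double directly.

-----------------------------
import std.stdio;

void main()
{
    double a = 1.0;
    double b = 0x1.0008p-53;   // 2^-53 + 2^-66, exactly representable

    // One rounding straight to double (what SSE hardware does).
    double direct = a + b;

    // Round to real first, then to double -- the pattern used when the
    // internal accumulator is `real` but the requested result is double.
    double viaReal = cast(double)(cast(real) a + cast(real) b);

    // On a target with 80-bit real and SSE doubles, the results differ
    // in the last bit:
    writefln("%a", direct);    // expected 0x1.0000000000001p+0
    writefln("%a", viaReal);   // expected 0x1p+0 (the tie rounds to even)
}
-----------------------------
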
> > Another good point about Mir's summation algorithms is that they
> > are Output Ranges. This means they can be used effectively to sum
> > multidimensional arrays, for example. Also, the Precise summator
> > may be used to compute the exact sum of distributed data.
> >
> > When we have a decision and a solution for the rounding problem, I
> > will make a PR for std.experimental.numeric.sum.
> >  
> >> I hear you. I'd like to explore ways of solving it. Got any
> >> ideas?  
> >
> > We need to look at the overall picture.
> >
> > It is very important to recognise that the D core team is small and
> > that the D community is not yet large enough to bring in many new
> > professionals. This means that the time of the existing engineers
> > is very important for D, and the most important engineer for D is
> > you, Walter.
> >
> > At the same time, we need to move forward quickly with language
> > changes and druntime changes (GC-less fibers, for example).
> >
> > So we need to choose our development options cleverly. The most
> > important option for D in the scientific context is to separate the
> > D programming language from DMD in our minds. I am not asking to
> > remove DMD as the reference compiler. Instead, we can introduce
> > changes in D that cannot be optimally implemented in DMD (because
> > you have far more important things to do for D than optimisation)
> > but that will be awesome for our LLVM-based and GCC-based backends.
> >
> > We need two new pragmas with the same syntax as `pragma(inline, xxx)`:
> >
> > 1. `pragma(fusedMath)` allows fused mul-add, mul-sub, div-add, and
> >    div-sub operations.
> > 2. `pragma(fastMath)` is equivalent to [1]. This pragma can be used
> >    to allow extended precision.
> >
> > These should be two separate pragmas. The second one may imply the
> > first.
> >
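A hypothetical sketch of how the proposed pragmas might look in user
code; neither pragma exists in any current compiler, so this is
proposal syntax only, modelled on `pragma(inline, true)`.

-----------------------------
// Proposal sketch only -- `fusedMath` is not a real pragma today.
double dotFused(const(double)[] a, const(double)[] b)
{
    pragma(fusedMath);          // proposed: allow fused mul-add here
    assert(a.length == b.length);
    double s = 0;
    foreach (i; 0 .. a.length)
        s += a[i] * b[i];       // a backend could emit FMA instructions
    return s;
}
-----------------------------
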
> > A recent LDC beta has a @fastmath attribute for functions, and it
> > is already used in the Phobos ndslice.algorithm PR and in its Mir
> > mirror. Attributes are an alternative to pragmas, but their syntax
> > would need to be extended; see [2].
> >
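For comparison, a sketch of the existing LDC attribute mentioned above;
it assumes LDC's ldc.attributes module and compiles only with LDC.

-----------------------------
version (LDC)
{
    import ldc.attributes : fastmath;

    // @fastmath relaxes IEEE semantics for this one function only;
    // callers and the rest of the program are unaffected.
    @fastmath double dotFast(const(double)[] a, const(double)[] b)
    {
        assert(a.length == b.length);
        double s = 0;
        foreach (i; 0 .. a.length)
            s += a[i] * b[i];
        return s;
    }
}
-----------------------------
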
> > The old approach is separate compilation, but it is awkward and too
> > low-level for users, and it requires significant effort for both
> > small and large projects.
> >
> > [1] http://llvm.org/docs/LangRef.html#fast-math-flags
> > [2] https://github.com/ldc-developers/ldc/issues/1669  
> 
> Thanks for your help with this.
> 
> Using attributes for this is a mistake. Attributes affect the
> interface to a function

This is not true for UDAs. LDC and GDC actually implement @attribute
as a UDA. And the UDAs used in serialization interfaces, the
std.benchmark proposals, ... do not affect the interface either.
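
For example (a sketch with a made-up `Benchmark` UDA type, not the real
std.benchmark API), a UDA hangs metadata on a declaration without
changing how callers see it:

-----------------------------
struct Benchmark { string name; }        // hypothetical UDA type

@Benchmark("inner loop") void step() { }

void caller()
{
    step();   // the call site neither sees nor depends on the UDA
}
-----------------------------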

> not its internal implementation.

It's possible to reflect on the UDAs of the current function, so this
is not true in general:
-----------------------------
@(40) int foo()
{
    // __FUNCTION__ is the fully qualified name of the current function,
    // so this mixin declares an alias to foo itself.
    mixin("alias thisFunc = " ~ __FUNCTION__ ~ ";");
    // Read the first UDA (the 40) back from inside the function body.
    return __traits(getAttributes, thisFunc)[0];
}
-----------------------------
https://dpaste.dzfl.pl/aa0615b40adf

I think this restriction is also quite arbitrary. For end users,
attributes provide a much nicer syntax than pragmas. Both GDC and LDC
already successfully use UDAs for function-specific backend options, so
DMD is really the exception here.

Additionally, even according to your own rule, pragma(mangle) should
actually be @mangle.

