DIP80: phobos additions

Ilya Yaroshenko via Digitalmars-d digitalmars-d at puremagic.com
Fri Jun 12 07:08:54 PDT 2015


On Friday, 12 June 2015 at 11:00:20 UTC, Manu wrote:
>>>
>>> Low-level optimisation is a sliding scale, not a binary position.
>>> Reaching an 'optimal' state definitely requires careful consideration
>>> of all the details you refer to, but there are a lot of improvements
>>> that can be gained from quickly written code without full low-level
>>> optimisation. A lot of basic low-level optimisations (like just using
>>> appropriate opcodes, or eliding redundant operations; i.e., squares
>>> followed by sqrt) can't be applied without first simplifying
>>> expressions.
>>
>>
>> OK, generally you are talking about something we can name MathD. I
>> understand the reasons. However, I am strictly against algebraic
>> operations (or eliding redundant operations for floating point) for
>> basic routines in a systems programming language.
>
> That's nice... I'm all for it :)
>
> Perhaps if there were some distinction between a base type and an
> algebraic type?
> I wonder if it would be possible to express an algebraic expression
> like a lazy range, and then capture the expression at the end and
> simplify it with some fancy template...
> I'd call that an abomination, but it might be possible. Hopefully
> nobody in their right mind would ever use that ;)

... for example, we can optimise matrix chain multiplication:
https://en.wikipedia.org/wiki/Matrix_chain_multiplication
----
// calls `this(MatrixExp!double chain)`
Matrix!double m = m1 * m2 * m3 * m4;
----
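
A minimal sketch of how such a chain could be captured lazily and only
evaluated inside the Matrix constructor (my illustration: the MatrixExp
and Matrix names follow the comment above, everything else is made up,
and the DP that actually picks the parenthesisation is omitted):
----
import std.stdio;

// Lazy expression node: records the whole multiplication chain instead
// of evaluating anything eagerly.
struct MatrixExp(T)
{
    Matrix!T[] chain;   // operands in source order

    // (m1*m2)*m3*... keeps appending to the same chain.
    MatrixExp!T opBinary(string op : "*")(Matrix!T rhs)
    {
        return MatrixExp!T(chain ~ rhs);
    }
}

struct Matrix(T)
{
    size_t rows, cols;
    T[] data;

    this(size_t rows, size_t cols)
    {
        this.rows = rows;
        this.cols = cols;
        data = new T[rows * cols];
    }

    // The first `*` turns two matrices into a lazy MatrixExp.
    MatrixExp!T opBinary(string op : "*")(Matrix!T rhs)
    {
        return MatrixExp!T([this, rhs]);
    }

    // `Matrix!double m = m1*m2*m3*m4;` lands here: this is where the
    // matrix-chain-multiplication DP would pick the cheapest
    // parenthesisation before issuing any gemm calls.
    this(MatrixExp!T e)
    {
        writefln("evaluating a chain of %s matrices", e.chain.length);
        // ... choose an order, then multiply pairwise (omitted) ...
    }
}

void main()
{
    auto m1 = Matrix!double(2, 3), m2 = Matrix!double(3, 4),
         m3 = Matrix!double(4, 1), m4 = Matrix!double(1, 5);
    Matrix!double m = m1 * m2 * m3 * m4;   // calls this(MatrixExp!double)
}
----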

>> Even the internal conversion of float/double to real in math
>> expressions is a huge headache when math algorithms are implemented
>> (see the first two comments at
>> https://github.com/D-Programming-Language/phobos/pull/2991 ).
>> In a systems PL, sqrt(x)^2 should compile as is.
>
> Yeah... unless you use -fast-math, in which case I want the compiler
> to do whatever it can.
> Incidentally, I don't think I've ever run into a case in practice
> where precision was lost by doing _less_ operations.

Mathematical functions require a concrete order of operations; see
http://www.netlib.org/cephes/ (std.mathspecial and a bit of
std.math/std.numeric are based on Cephes).
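
As a toy illustration of why the written order matters (my example, not
one from Cephes): floating-point addition is not associative, so an
optimiser that regroups the same operands can change the rounded result.
----
import std.stdio;

void main()
{
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;

    float ab = a + b;   // exact: 0.0f
    float bc = b + c;   // rounds back to -1.0e8f: the 1 is lost
    writeln(ab + c);    // prints 1
    writeln(a + bc);    // prints 0
}
----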

>> Such optimisations can be implemented on top of the basic routines
>> (pow, sqrt, gemv, gemm, etc.). We can use an approach similar to D's
>> compile-time regexp.
>
> Not really. The main trouble is that many of these patterns only
> emerge when inlining is performed.
> It would be particularly awkward to express such expressions in some
> DSL that spanned across conventional API boundaries.

If I am not wrong, in both LLVM and GCC a `fast-math` attribute can be
applied to individual functions. This feature could be implemented in D.
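
As a sketch of what that could look like in D (the `@fastmath` UDA below
is hypothetical, not an existing Phobos or druntime attribute; a compiler
would have to recognise it and lower it to LLVM's per-function fast-math
flags or GCC's equivalent):
----
import std.stdio;

// Hypothetical marker UDA; nothing recognises it today.
struct fastmath {}

// Relaxed semantics requested for this one function only: the compiler
// could reorder, vectorise and contract these floating-point operations.
@fastmath double dot(const(double)[] a, const(double)[] b)
{
    double s = 0;
    foreach (i; 0 .. a.length)
        s += a[i] * b[i];
    return s;
}

void main()
{
    // Everything outside @fastmath functions keeps the written order.
    writeln(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])); // 32
}
----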

