DIP80: phobos additions

Manu via Digitalmars-d digitalmars-d at puremagic.com
Fri Jun 12 04:00:10 PDT 2015


On 12 June 2015 at 15:22, Ilya Yaroshenko via Digitalmars-d
<digitalmars-d at puremagic.com> wrote:
> On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:
>>
>> On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
>> <digitalmars-d at puremagic.com> wrote:
>>>
>>> On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
>>>>
>>>>
>>>> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
>>>> <digitalmars-d at puremagic.com> wrote:
>>>>>
>>>>>> I believe that Phobos must support some common methods of linear
>>>>>> algebra and general mathematics. I have no desire to join D with
>>>>>> Fortran libraries :)
>>>>>
>>>>> D definitely needs BLAS API support for matrix multiplication. The
>>>>> best BLAS libraries, like OpenBLAS, are written in assembler.
>>>>> Otherwise D will place last in the corresponding math benchmarks.
>>>>
>>>> A complication for linear algebra (or other mathsy things in general)
>>>> is the inability to detect and implement compound operations.
>>>> We don't declare mathematical operators to be algebraic operations,
>>>> which I think is a lost opportunity.
>>>> If we defined the operators along with their properties
>>>> (commutativity, transitivity, invertibility, etc), then the compiler
>>>> could potentially do an algebraic simplification on expressions before
>>>> performing codegen and optimisation.
>>>> There are a lot of situations where the optimiser can't simplify
>>>> expressions because it runs into an arbitrary function call, and I've
>>>> never seen an optimiser that understands exp/log/roots, etc, to the
>>>> point where it can reduce those expressions properly. To compete with
>>>> maths benchmarks, we need some means to simplify expressions properly.
>>>
>>> Simplified expressions would [NOT] help because:
>>> 1. At the matrix (high) level, optimisation can be done very well by the
>>> programmer (algorithms over matrices are small in terms of the number of
>>> matrix multiplications).
>>
>>
>> Perhaps you've never worked with incompetent programmers (in my
>> experience, >50% of the professional workforce).
>> Programmers, on average, don't know maths. They literally have no idea
>> how to simplify an algebraic expression.
>> I think there are about 3-4 (being generous!) people in my office (of
>> 30-40) who could do it properly, and without spending heaps of time
>> on it.
>>
>>> 2. Low-level optimisation requires CPU- and cache-specific tuning.
>>> Modern implementations are optimised for all cache levels. See the work
>>> by Kazushige Goto:
>>> http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
>>
>>
>> Low-level optimisation is a sliding scale, not a binary position.
>> Reaching 'optimal' state definitely requires careful consideration of
>> all the details you refer to, but there are a lot of improvements that
>> can be gained from quickly written code without full low-level
>> optimisation. A lot of basic low-level optimisations (like just using
>> appropriate opcodes, or eliding redundant operations; e.g., squares
>> followed by sqrt) can't be applied without first simplifying
>> expressions.
>
>
> OK, generally you are talking about something we could call MathD. I
> understand the reasons. However, I am strictly against algebraic
> rewriting (or eliding redundant floating-point operations) in the basic
> routines of a systems programming language.

That's nice... I'm all for it :)

Perhaps if there were some distinction between a base type and an
algebraic type?
I wonder if it would be possible to express an algebraic expression
like a lazy range, and then capture the expression at the end and
simplify it with some fancy template...
I'd call that an abomination, but it might be possible. Hopefully
nobody in their right mind would ever use that ;)
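
Just to illustrate the shape of it (a rough, untested sketch; all the
names here are made up):

    // Capture the expression tree in the type system; a 'fancy template'
    // could then pattern-match and simplify the tree before evaluating it.
    struct Var
    {
        double value;
        auto opBinary(string op, T)(T rhs)
        {
            return Expr!(op, Var, T)(this, rhs);
        }
    }

    struct Expr(string op, L, R)
    {
        L lhs;
        R rhs;
        auto opBinary(string o, T)(T rhs)
        {
            return Expr!(o, typeof(this), T)(this, rhs);
        }
    }

    double eval(Var v) { return v.value; }
    double eval(string op, L, R)(Expr!(op, L, R) e)
    {
        // a real version would rewrite the tree here first, e.g.
        // sqrt(x)*sqrt(x) -> x, using declared algebraic properties
        static if (op == "+") return eval(e.lhs) + eval(e.rhs);
        else static if (op == "*") return eval(e.lhs) * eval(e.rhs);
        else static assert(0, "unhandled operator " ~ op);
    }

    unittest
    {
        auto x = Var(3), y = Var(4);
        auto e = x * y + x;   // Expr!("+", Expr!("*", Var, Var), Var)
        assert(eval(e) == 15);
    }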

> Even the internal float/double conversion to real in math expressions is a
> huge headache when implementing math algorithms (see the first two comments
> at https://github.com/D-Programming-Language/phobos/pull/2991 ). In a
> systems PL, sqrt(x)^2 should compile as-is.

Yeah... unless you use -ffast-math, in which case I want the compiler to
do whatever it can.
Incidentally, I don't think I've ever run into a case in practice
where precision was lost by doing _fewer_ operations.
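
For example (hypothetical, but it's the shape of the thing):

    import std.math;

    double f(double x) { return sqrt(x) ^^ 2; } // ~x, plus rounding error
    double g(double x) { return x; }            // the simplified form

For x >= 0, g is both faster and _more_ precise than f; sqrt rounds,
and squaring the result compounds the error.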

> Such optimisations can be implemented on top of the basic routines (pow,
> sqrt, gemv, gemm, etc). We could use an approach similar to D's
> compile-time regex.

Not really. The main trouble is that many of these patterns only
emerge when inlining is performed.
It would be particularly awkward to express such expressions in some
DSL that spanned across conventional API boundaries.
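
For instance (contrived, hypothetical code):

    import std.math : sqrt;

    struct Vec2 { double x, y; }

    double length(Vec2 v) { return sqrt(v.x*v.x + v.y*v.y); }

    Vec2 normalise(Vec2 v)
    {
        immutable len = length(v);
        return Vec2(v.x/len, v.y/len);
    }

    double f(Vec2 v)
    {
        // mathematically always 1.0 (for non-zero v), but the sqrt/divide
        // redundancy only becomes visible once normalise() and length()
        // have been inlined; no source-level DSL sees across that boundary
        return length(normalise(v));
    }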

