DIP80: phobos additions

Ilya Yaroshenko via Digitalmars-d digitalmars-d at puremagic.com
Thu Jun 11 22:22:31 PDT 2015


On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:
> On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
> <digitalmars-d at puremagic.com> wrote:
>> On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:
>>>
>>> On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
>>> <digitalmars-d at puremagic.com> wrote:
>>>>
>>>>
>>>>> I believe that Phobos must support some common methods of
>>>>> linear algebra and general mathematics. I have no desire to
>>>>> join D with Fortran libraries :)
>>>>
>>>>
>>>>
>>>> D definitely needs BLAS API support for matrix
>>>> multiplication. The best BLAS libraries, such as OpenBLAS, are
>>>> written in assembler. Otherwise D will be in last position in
>>>> the corresponding math benchmarks.
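
To make the binding side concrete: it can be a thin extern(C)
declaration over any CBLAS-compatible library (OpenBLAS, for
example) linked into the program. A rough sketch; the signature
and enum values below are the standard CBLAS ones:

// Minimal CBLAS binding, assuming a CBLAS-compatible library
// (e.g. OpenBLAS) is linked in.
extern (C) void cblas_dgemm(
    int order, int transA, int transB,
    int m, int n, int k,
    double alpha, const(double)* a, int lda,
    const(double)* b, int ldb,
    double beta, double* c, int ldc);

enum CblasRowMajor = 101;
enum CblasNoTrans  = 111;

// C = A * B for row-major (m x k) and (k x n) matrices.
void mul(size_t m, size_t n, size_t k,
         const(double)[] a, const(double)[] b, double[] c)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
        cast(int) m, cast(int) n, cast(int) k,
        1.0, a.ptr, cast(int) k,
        b.ptr, cast(int) n,
        0.0, c.ptr, cast(int) n);
}

The packing, blocking and assembler inner kernels then come from
the library; only this thin layer would need to live in Phobos.
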
>>>
>>>
>>> A complication for linear algebra (or other mathsy things in
>>> general) is the inability to detect and implement compound
>>> operations.
>>> We don't declare mathematical operators to be algebraic
>>> operations, which I think is a lost opportunity.
>>> If we defined the operators along with their properties
>>> (commutativity, transitivity, invertibility, etc.), then the
>>> compiler could potentially do an algebraic simplification on
>>> expressions before performing codegen and optimisation.
>>> There are a lot of situations where the optimiser can't simplify
>>> expressions because it runs into an arbitrary function call, and
>>> I've never seen an optimiser that understands exp/log/roots,
>>> etc., to the point where it can reduce those expressions
>>> properly. To compete in maths benchmarks, we need some means to
>>> simplify expressions properly.
>>
>>
>> Simplified expressions would not help, because:
>> 1. At the matrix (high) level, optimisation can be done very well
>> by the programmer (matrix algorithms, measured in the number of
>> matrix multiplications, are small).
>
> Perhaps you've never worked with incompetent programmers (in my
> experience, >50% of the professional workforce).
> Programmers, on average, don't know maths. They literally have 
> no idea
> how to simplify an algebraic expression.
> I think there are about 3-4 (being generous!) people in my 
> office (of
> 30-40) that could do it properly, and without spending heaps of 
> time
> on it.
>
>> 2. Low-level optimisation requires CPU- and cache-specific
>> tuning. Modern implementations are optimised for all cache
>> levels. See the work by Kazushige Goto:
>> http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
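
The core idea of that paper, for those who haven't read it, is to
process the matrices in tiles sized to fit the caches. A toy
illustration of just the blocking (a real GotoBLAS kernel also
packs panels into contiguous buffers and uses hand-written
assembler, so this is only a sketch):

import std.algorithm : min;

enum blk = 64; // tile edge; real libraries tune this per cache level

// C += A * B for n x n row-major matrices, processed in blk x blk
// tiles so each tile of A and B is reused while it is still in
// cache.
void gemmBlocked(size_t n, const(double)[] a, const(double)[] b,
                 double[] c)
{
    for (size_t i0 = 0; i0 < n; i0 += blk)
    for (size_t k0 = 0; k0 < n; k0 += blk)
    for (size_t j0 = 0; j0 < n; j0 += blk)
        foreach (i; i0 .. min(i0 + blk, n))
            foreach (k; k0 .. min(k0 + blk, n))
            {
                const aik = a[i * n + k];
                foreach (j; j0 .. min(j0 + blk, n))
                    c[i * n + j] += aik * b[k * n + j];
            }
}
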
>
> Low-level optimisation is a sliding scale, not a binary position.
> Reaching an 'optimal' state definitely requires careful
> consideration of all the details you refer to, but a lot of
> improvement can be gained from quickly written code without full
> low-level optimisation. A lot of basic low-level optimisations
> (like just using appropriate opcodes, or eliding redundant
> operations, e.g. a square followed by a sqrt) can't be applied
> without first simplifying expressions.

OK, generally you are talking about something we could call MathD. 
I understand the reasons. However, I am strictly against algebraic 
optimisations (or the eliding of redundant floating-point 
operations) in the basic routines of a systems programming 
language. Even the internal float/double-to-real promotion in math 
expressions is a huge headache when implementing math algorithms 
(see the first two comments at 
https://github.com/D-Programming-Language/phobos/pull/2991 ). In a 
systems PL, sqrt(x)^^2 should compile as is.
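
For illustration, here is why the rewrite is not harmless (plain D,
double precision; exact output depends on the target's rounding):

import std.math;
import std.stdio;

void main()
{
    double x = 2.0;
    // An algebraic optimiser might fold sqrt(x)^^2 to x, but the
    // two are not bit-identical: the explicit form rounds twice.
    writeln(sqrt(x) ^^ 2 == x); // typically false:
                                // sqrt(2.0)^^2 == 2.0000000000000004
    // For negative x the fold changes semantics entirely:
    writeln(sqrt(-1.0) ^^ 2);   // nan; the "simplified" form
                                // would print -1
}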

Such optimisations can instead be implemented on top of the basic 
routines (pow, sqrt, gemv, gemm, etc.), using an approach similar 
to D's compile-time regex.
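
A very rough sketch of the opt-in idea (all names here are
hypothetical; the point is only that the user asks for the
simplification explicitly, so the basic routines stay exact):

import std.math : sqrt;

// Expression nodes, built at compile time.
struct Var {}
struct SqrtE(E) { E e; }
struct SqE(E)   { E e; }

auto var() { return Var(); }
auto sqrtE(E)(E e) { return SqrtE!E(e); }
auto sqE(E)(E e)   { return SqE!E(e); }

// eval walks the tree. The one rewrite rule -- (sqrt e)^^2 => e --
// is applied statically, and only because the user built the
// expression this way; sqrt itself is never touched.
double eval(E)(E expr, double x)
{
    static if (is(E == Var))
        return x;
    else static if (is(E == SqrtE!F, F))
        return sqrt(eval(expr.e, x));
    else static if (is(E == SqE!F, F))
    {
        static if (is(F == SqrtE!G, G))
            return eval(expr.e.e, x); // algebraic fold, by request
        else
        {
            auto v = eval(expr.e, x);
            return v * v;
        }
    }
    else
        static assert(0, "unhandled node");
}

void main()
{
    auto e = sqE(sqrtE(var()));
    assert(eval(e, -1.0) == -1.0); // folded: no NaN, because the
                                   // user opted in
}

As with ctRegex, the default stays exact and the simplification is
a library choice, not a compiler one.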

Best,
Ilya

