Scientific computing and parallel computing C++23/C++26

Bruce Carneal bcarneal at gmail.com
Thu Jan 13 18:41:54 UTC 2022


On Thursday, 13 January 2022 at 16:31:11 UTC, Tejas wrote:
> On Thursday, 13 January 2022 at 14:24:59 UTC, Bruce Carneal 
> wrote:
>
>> Yes.  The language independent work in LLVM in the accelerator 
>> area is hugely important for dcompute, essential.
>
> Sorry if this sounds ignorant, but does SPIR-V count for 
> nothing?

SPIR-V is *very* useful.  It is the catalyst and focal point of 
some of the most important ongoing LLVM accelerator work.  
Nicholas and I both believe that work could provide a much 
more robust intermediate target for dcompute once it reaches 
release status.

>
>
>>  Gotta surf that wave as we don't have the manpower to go 
>> independent.  I don't think *anybody* has that amount of 
>> manpower, hence the collaboration/consolidation around LLVM as 
>> a back-end for accelerators.
>>
>>>
>>> There was a time to try overthrow C++, that was 10 years ago, 
>>> LLVM was hardly relevant and GPGPU computing still wasn't 
>>> mainstream.
>>
>> Yes.  The "overthrow" of C++ should be a non-goal, IMO, 
>> starting yesterday.
>
> Overthrowing may be hopeless, but I feel we should at least 
> be really competitive with them.

Sure.  We need to offer something that is actually better; we 
just don't need to be perceived as better by everyone in all 
scenarios.  An example: if management is deathly afraid of 
anything but microscopic incremental development or, more 
charitably, management weighs the risks of new development very, 
very heavily, then D is unlikely to be given a chance.

> Because it doesn't matter whether we're competing with C++ or 
> not, people will compare us with it since that's the other 
> choice when people want to write extremely performant GPU 
> code (if they care about ease of setup and productivity and 
> _not_ performance-at-any-cost, Julia and Python have beaten 
> us to it :-( )

Yes.  We should evaluate our efforts by comparing (competing) 
with alternatives where available.  D/dcompute is already, for my 
GPU work at least, much better than CUDA/C++.  Concretely: I can 
achieve equivalent or higher performance more quickly with more 
readable code than I could formerly with CUDA/C++.  There are 
some things that are trivial in D kernels (like 
live-in-register/mem-bandwidth-minimized stencil processing) that 
would require "heroic" effort in CUDA/C++.
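To make the "live-in-register" point concrete, here is a minimal 
scalar C++ sketch of the technique (my illustration only, not 
actual dcompute or CUDA kernel code, and `stencil3` is a made-up 
name): a 3-point stencil where each input element is loaded from 
memory exactly once and rotated through a small register-resident 
window, minimizing memory bandwidth.

```cpp
#include <cstddef>
#include <vector>

// 3-point averaging stencil with a register-rotated sliding window.
// Each input element is read from memory once (one load + one store
// per output point); the window itself lives in three scalars that
// a compiler can keep in registers.
std::vector<float> stencil3(const std::vector<float>& in) {
    std::vector<float> out(in.size(), 0.0f);
    if (in.size() < 3) return out;     // boundary points left as 0
    float a = in[0], b = in[1];        // register-resident window
    for (std::size_t i = 2; i < in.size(); ++i) {
        float c = in[i];               // single load per element
        out[i - 1] = (a + b + c) / 3.0f;
        a = b;                         // rotate the window
        b = c;
    }
    return out;
}
```

On a GPU the same idea is applied per-thread along the iteration 
direction of the stencil; the point of the comparison above is 
that expressing this cleanly is straightforward in a D kernel but 
takes considerable boilerplate in CUDA/C++.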

That said, there are definitely things that we could improve in 
the dcompute/accelerator area, particularly wrt the on-ramp for 
those new to accelerator programming.  But, as you note, D is 
unlikely to be adopted by the "performance is good enough with 
existing solutions" crowd in any case.  That's fine.



More information about the Digitalmars-d mailing list