Scientific computing and parallel computing C++23/C++26

Nicholas Wilson iamthewilsonator at hotmail.com
Fri Jan 14 00:56:32 UTC 2022


On Thursday, 13 January 2022 at 23:28:01 UTC, Guillaume Piolat 
wrote:
> As a former GPGPU guy: can you explain in what ways dcompute 
> improves life over using CUDA and OpenCL through 
> DerelictCL/DerelictCUDA (I used to maintain them and I think 
> nobody ever used them). Using the API directly seems to offer 
> the most control to me, and no special compiler support.

Because dcompute exposes the underlying API objects 1:1, it is 
entirely possible to use it as simply a wrapper over OpenCL/CUDA 
and benefit from the enhanced usability it offers (e.g. querying 
OpenCL API objects for their properties is _faaaar_ simpler and 
less error-prone with dcompute), and you can always get the raw 
pointer and do things manually if you need to. Also, dcompute uses 
DerelictCL/DerelictCUDA underneath anyway (thanks for them!).
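
For instance, getting a device name with the raw API (here via 
DerelictCL) is a two-call dance with manual buffer handling, which 
dcompute's wrappers reduce to a property access. A minimal sketch 
of the raw version, with no error checking, and details may vary 
between binding versions:

```d
import derelict.opencl.cl;
import std.stdio;

void main()
{
    DerelictCL.load();

    cl_platform_id platform;
    cl_device_id   device;
    clGetPlatformIDs(1, &platform, null);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, null);

    // Every string property needs two calls: one for the length,
    // one for the data, then slicing off the trailing NUL.
    size_t len;
    clGetDeviceInfo(device, CL_DEVICE_NAME, 0, null, &len);
    auto buf = new char[len];
    clGetDeviceInfo(device, CL_DEVICE_NAME, len, buf.ptr, null);
    writeln("Device: ", buf[0 .. len - 1]);
}
```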

If you're thinking of "special compiler support" as what CUDA does 
with its <<<>>> launch syntax, then no: dcompute does all of that, 
but not with special help from the compiler, only with the 
metaprogramming and reflection available to any other D program. 
It's D all the way down to the API calls. (Obviously there is 
special compiler support to turn D code into compute kernels, but 
that's separate from the launch machinery.)
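
For context, the device side is ordinary D marked up with 
dcompute's attributes, roughly along the lines of dcompute's saxpy 
test (simplified here, so names may not match the current API 
exactly):

```d
@compute(CompileFor.deviceOnly) module kernels;

import ldc.dcompute;
import dcompute.std.index;

alias gf = GlobalPointer!float;

// LDC compiles this to SPIR-V (for OpenCL) and PTX (for CUDA); on
// the host it is just a symbol you hand to the launch machinery.
@kernel void saxpy(gf res, float alpha, gf x, gf y, size_t N)
{
    auto i = GlobalIndex.x;
    if (i >= N) return;
    res[i] = alpha * x[i] + y[i];
}
```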

The main benefit of dcompute is turning kernel launches into 
type-safe one-liners, as opposed to brittle, type-unsafe paragraphs 
of code.
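
Roughly, and untested, the comparison looks like this (the dcompute 
call is quoted in the comment since I'm writing it from memory; 
buffer and queue setup is assumed to have happened elsewhere):

```d
import derelict.opencl.cl;

// The raw OpenCL launch for a saxpy kernel: one untyped
// clSetKernelArg per argument, and nothing stops you from passing
// the wrong size or getting the argument indices out of order.
void launchRaw(cl_command_queue queue, cl_kernel kern,
               cl_mem b_res, float alpha, cl_mem b_x, cl_mem b_y,
               size_t N)
{
    clSetKernelArg(kern, 0, cl_mem.sizeof, &b_res);
    clSetKernelArg(kern, 1, float.sizeof,  &alpha);
    clSetKernelArg(kern, 2, cl_mem.sizeof, &b_x);
    clSetKernelArg(kern, 3, cl_mem.sizeof, &b_y);
    clSetKernelArg(kern, 4, size_t.sizeof, &N);
    size_t global = N;
    clEnqueueNDRangeKernel(queue, kern, 1, null, &global, null,
                           0, null, null);

    // The dcompute equivalent is a single statement whose arguments
    // are checked against the kernel's signature at compile time,
    // along the lines of:
    //     q.enqueue!(saxpy)([N])(b_res, alpha, b_x, b_y, N);
}
```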


