Scientific computing and parallel computing C++23/C++26

Nicholas Wilson iamthewilsonator at hotmail.com
Fri Jan 21 03:23:59 UTC 2022


On Thursday, 20 January 2022 at 08:36:32 UTC, Ola Fosheim Grøstad 
wrote:
> Yes, so why do you need compile time features?
>
> My understanding is that the goal of nvc++ is to compile to CPU 
> or GPU based on what pays off more for the actual code. So it 
> will not need any annotations (it is up to the compiler to 
> choose between CPU/GPU?). Bryce suggested that it currently 
> only targets one specific GPU, but that it will target multiple 
> GPUs for the same executable in the future.

There are two major advantages to compile-time features: one on 
the host side and one on the device side (e.g. the GPU).

On the host side, D metaprogramming lets DCompute match what 
CUDA does with its <<<>>> kernel launch syntax, in terms of type 
safety and convenience, using regular D code. That feature is 
what makes CUDA pleasant to use; OpenCL's lack of it makes it 
quite unpleasant, and turns any change to a kernel's signature 
into a refactoring job of its own.
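
For reference, a typed CUDA launch looks roughly like this (a 
minimal, untested sketch; the saxpy kernel and the launch sizes 
are just placeholders):

    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Kernel arguments are passed like a normal function call,
        // so they are type-checked at compile time; change the
        // kernel's signature and every call site fails to compile.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        cudaFree(x);
        cudaFree(y);
    }

OpenCL's equivalent is a sequence of clSetKernelArg(kernel, 
index, size, pointer) calls with no compile-time checking of 
argument types or order. DCompute gets the same call-site 
checking CUDA has, using ordinary D metaprogramming rather than 
a language extension.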

On the device side, I'm sure Bryce can give you some concrete 
examples.

> The goal for C++ parallelism is to make it fairly transparent 
> to the programmer. Or did I misunderstand what he said?

You want it to be transparent, not invisible.

>> Same caveats apply for metal (should be pretty easy to do: 
>> need Objective-C support in LDC, need Metal bindings).
>
> Use clang to compile the objective-c code to object files and 
> link with it?

That won't work: D needs to be able to call the Objective-C 
code. You could use a C or C++ shim, but that would be pretty 
ugly.


