Scientific computing and parallel computing C++23/C++26

Ola Fosheim Grøstad ola.fosheim.grostad at gmail.com
Thu Jan 20 19:57:54 UTC 2022


On Thursday, 20 January 2022 at 17:43:22 UTC, Bruce Carneal wrote:
> It's possible, for instance, that you can *know*, from first 
> principles, that you'll never meet objective X if forced to use 
> platform Y.  In general, though, you'll just have a sense of 
> the order in which things should be evaluated.

This doesn't change the desire to do performance testing at 
install or boot time, IMO. Even a "narrow" platform like the Mac 
is quite broad at this point; PCs are even broader.
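
Concretely, I mean something like timing the candidate 
implementations once at install or first boot and caching the 
winner. A minimal sketch; the saxpy kernel and function names are 
made up for illustration:

    #include <chrono>
    #include <cstddef>
    #include <vector>

    using Kernel = void (*)(std::vector<float>&,
                            const std::vector<float>&);

    // Two interchangeable builds of the same kernel; in a real
    // setup these could be a scalar build and a wide/offloaded one.
    void saxpy_a(std::vector<float>& x, const std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            x[i] = 2.0f * x[i] + y[i];
    }
    void saxpy_b(std::vector<float>& x, const std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)  // same arithmetic,
            x[i] = 2.0f * x[i] + y[i];              // different codegen
    }

    // Time one run of a kernel on throwaway copies of the data.
    double time_once(Kernel k, std::vector<float> x,
                     const std::vector<float>& y) {
        auto t0 = std::chrono::steady_clock::now();
        k(x, y);
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
    }

    // Run once at install/boot; cache the result for later runs.
    Kernel pick_kernel() {
        std::vector<float> x(1 << 22, 1.0f), y(1 << 22, 2.0f);
        return time_once(saxpy_a, x, y) <= time_once(saxpy_b, x, y)
                   ? saxpy_a : saxpy_b;
    }

    int main() {
        Kernel k = pick_kernel();
        std::vector<float> x(8, 1.0f), y(8, 2.0f);
        k(x, y);  // dispatch through the cached choice
    }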


> Yes, SIMD can be the better performance choice sometimes.  I 
> think that many people will choose to do a SIMD implementation 
> as a performance, correctness testing and portability baseline 
> regardless of the accelerator possibilities.

My understanding is that Bryce's presentation suggested that you 
would just write "fairly normal" C++ code and let the compiler 
generate CPU or GPU instructions transparently, so you should not 
have to write SIMD code yourself. SIMD would be the fallback 
option.
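
For reference, the style I am referring to is (roughly) a standard 
algorithm with an execution policy; the saxpy below is my own 
sketch, not code from the talk. With nvc++ the -stdpar=gpu flag 
offloads such algorithms to the GPU, while other toolchains run 
them on CPU threads (libstdc++ needs TBB linked in for its 
parallel backend):

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<float> x(1 << 24, 1.0f), y(1 << 24, 2.0f);

        // "Fairly normal" C++: no intrinsics, no CUDA. Where this
        // actually runs is up to the compiler and runtime.
        std::transform(std::execution::par_unseq,
                       x.begin(), x.end(), y.begin(), x.begin(),
                       [](float a, float b) { return 2.0f * a + b; });
        return x[0] == 4.0f ? 0 : 1;
    }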

I think the point of having parallel support built into the 
language is not to get the absolute maximum performance, but to 
make writing performant code more accessible and cheaper.

If you end up having to handwrite SIMD to get decent performance, 
then that pretty much makes parallel support a fringe feature, 
i.e. it won't be of much use outside HPC shops with expensive 
equipment.
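
To be concrete about what "handwriting SIMD" means, here is the 
same kind of kernel written against one specific ISA (x86 SSE 
intrinsics). This per-architecture style is exactly what built-in 
parallel support ought to spare application programmers:

    #include <immintrin.h>  // x86-specific; non-portable by design
    #include <cstddef>
    #include <vector>

    // 2*x + y, hand-vectorized four floats at a time.
    void saxpy_sse(std::vector<float>& x,
                   const std::vector<float>& y) {
        const __m128 two = _mm_set1_ps(2.0f);
        std::size_t i = 0;
        for (; i + 4 <= x.size(); i += 4) {
            __m128 vx = _mm_loadu_ps(&x[i]);
            __m128 vy = _mm_loadu_ps(&y[i]);
            _mm_storeu_ps(&x[i], _mm_add_ps(_mm_mul_ps(two, vx), vy));
        }
        for (; i < x.size(); ++i)  // scalar tail for the leftovers
            x[i] = 2.0f * x[i] + y[i];
    }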

So in my mind this feature does require hardware vendors to focus 
on CPU/GPU integration, and it also requires a rather 
"intelligent" compiler and runtime setup to pay down the cost of 
the abstraction overhead.

I don't think just translating a language AST to an existing 
shared backend will be sufficient. If it were, Nvidia wouldn't 
have needed to invest in nvc++.

But, it remains to be seen who will pull this off, besides Nvidia.


