Scientific computing and parallel computing C++23/C++26
bcarneal at gmail.com
Thu Jan 13 02:16:33 UTC 2022
On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim wrote:
> I found the CppCon 2021 presentation
> [C++ Standard
> Parallelism](https://www.youtube.com/watch?v=LW_T2RGXego) by
> Bryce Adelstein Lelbach very interesting, unusually clear and
> filled with content. I like this man. No nonsense.
> It provides a view into what is coming for relatively high
> level and hardware agnostic parallel programming in C++23 or
> C++26. Basically a portable "high level" high performance
> parallel programming model.
> He also mentions the Nvidia C++ compiler *nvc++* which will
> make it possible to compile C++ to Nvidia GPUs in a somewhat
> transparent manner. (Maybe it already does, I have never tried
> to use it.)
> My gut feeling is that it will be very difficult for other
> languages to stand up to C++, Python and Julia in parallel
> computing. I get a feeling that the distance will only increase
> as time goes on.
> What do you think?
Given the emergence of ML in the commercial space and the
prevalence of accelerator HW on SoCs and elsewhere, this is a
timely topic, Ola.
We have at least two options: 1) try to mimic, or sit atop, the
often-byzantine interfaces that creak out of the C++ community,
or 2) go direct to the evolving metal with D meta-programming
shouldering most of the load. I favor the second, of course.
For reference, CUDA/C++ was my primary programming language for
5+ years prior to taking up D and, even in its admittedly
less-than-newbie-friendly state, I prefer dcompute to CUDA.
With some additional work dcompute could become a broadly
accessible path to world-beating performance-per-watt libraries
and apps. Code that you can actually understand at a glance when
you pick it up down the road.
Kudos to the dcompute contributors, especially Nicholas.
More information about the Digitalmars-d mailing list