Scientific computing and parallel computing C++23/C++26

Bruce Carneal bcarneal at gmail.com
Fri Jan 14 16:57:21 UTC 2022


On Friday, 14 January 2022 at 15:17:59 UTC, Ola Fosheim Grøstad 
wrote:
> On Friday, 14 January 2022 at 01:39:32 UTC, Nicholas Wilson 
> wrote:
>> On Thursday, 13 January 2022 at 22:27:27 UTC, Ola Fosheim 
>> Grøstad wrote:
>...
> The presentation by Bryce was quite explicitly focused on 
> making GPU computation available at the same level as CPU 
> computation (sans function pointers). This should be possible 
> for homogeneous memory systems (GPU and CPU sharing the same 
> memory bus) in a rather transparent manner, and languages 
> that plan for this might be perceived as much more productive 
> and performant if/when it becomes reality. And C++23 isn't 
> far away, if they make the deadline.

Yes.  Homogeneous memory accelerators, as found today in game 
consoles and SoCs, open up some nice possibilities.  Scheduling 
could still be problematic, though, since the accelerator is a 
single centralized resource (unlike per-core SIMD).  Distinct 
instruction formats (GPU vs CPU) also make an it-just-works, 
"sans function pointers" level of integration harder to 
achieve.  Surmountable, but there's a little work to do there.
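
For concreteness, here is a minimal sketch (mine, not from 
Bryce's talk) of the kind of code this aims at: a plain ISO 
C++ parallel algorithm that an offloading compiler such as 
nvc++ with -stdpar=gpu can already run on the GPU, leaning on 
unified/managed memory so nothing is copied explicitly.  On a 
genuinely homogeneous-memory SoC even that machinery mostly 
disappears.  Note that the callable must be a lambda or 
functor the device compiler can see; handing the algorithm an 
opaque host function pointer is exactly what the "sans 
function pointers" caveat rules out, since the GPU cannot 
execute host machine code.

// saxpy.cpp -- illustrative sketch; same source as a
// CPU-parallel version, only the toolchain decides where it
// runs (e.g. nvc++ -stdpar=gpu).
#include <algorithm>
#include <cstdio>
#include <execution>
#include <vector>

int main()
{
    const std::size_t n = 1 << 20;
    const float a = 2.0f;
    std::vector<float> x(n, 1.0f), y(n, 3.0f);

    // y = a*x + y, element-wise.  The body is a lambda the
    // device compiler can instantiate; an opaque host function
    // pointer here would not be offloadable.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [a](float xi, float yi) { return a * xi + yi; });

    std::printf("y[0] = %f\n", y[0]);  // expect 5.0
    return 0;
}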

I'm hopeful that SoCs, with their relatively friendly 
accelerator configurations, will be the economic enabler for 
widespread uptake of dcompute.  World-beating perf/watt from 
very readable code, deployable on billions of units?  I'm up 
for that!



