Scientific computing and parallel computing C++23/C++26
Bruce Carneal
bcarneal at gmail.com
Wed Jan 19 16:30:55 UTC 2022
On Wednesday, 19 January 2022 at 14:24:14 UTC, Paulo Pinto wrote:
> On Wednesday, 19 January 2022 at 13:32:37 UTC, Ola Fosheim
> Grøstad wrote:
>> On Wednesday, 19 January 2022 at 12:49:11 UTC, Paulo Pinto
>> wrote:
>>> It also needs to plug into the libraries, IDEs and GPGPU
>>> debuggers available to the community.
>>
>> But the presentation is not only about HPC, but making
>> parallel GPU computing as easy as writing regular C++ code and
>> being able to debug that code on the CPU.
>>
>> I actually think it is sufficient to support Metal and Vulkan
>> for this to be of value. The question is how much more
>> performance Nvidia manages to get out of its nvc++
>> compiler for regular GPUs in comparison to a Vulkan solution.
>
> Currently Vulkan Compute is not to be taken seriously.
For those wishing to deploy today, I agree, but it should be
considered for future deployments. That said, it's just one way
for dcompute to tie in. My current dcompute work runs, for
example, via PTX JIT courtesy of an Nvidia driver.
>
> Yes, the end goal of the industry efforts is that C++ will be
> the lingua franca of GPGPUs and FPGAs, which is why SYCL is
> collaborating with ISO C++ efforts.
Yes, apparently there's a huge amount of time/money being spent
on SYCL. We can co-opt much of that work underneath (the
upcoming LLVM SPIR-V backend, debuggers, profilers, some libs)
and provide a much better language on top. C++/SYCL is, to put
it charitably, cumbersome.
>
> As for HPC, that is where the money for these kinds of
> efforts comes from.
Perhaps, but I suspect other market segments will be (already
are?) more important going forward. Gaming generally, and ML on
SoCs, come to mind.