Scientific computing and parallel computing C++23/C++26
Bruce Carneal
bcarneal at gmail.com
Thu Jan 13 16:17:13 UTC 2022
On Thursday, 13 January 2022 at 14:50:59 UTC, bachmeier wrote:
> On Thursday, 13 January 2022 at 07:23:40 UTC, Bruce Carneal
> wrote:
>> On Thursday, 13 January 2022 at 03:56:00 UTC, bachmeier wrote:
>>> On Wednesday, 12 January 2022 at 22:50:38 UTC, Ola Fosheim
>>> Grøstad wrote:
>>>
>>>> My gut feeling is that it will be very difficult for other
>>>> languages to stand up to C++, Python and Julia in parallel
>>>> computing. I get a feeling that the distance will only
>>>> increase as time goes on.
>>>>
>>>> What do you think?
>>>
>>> It doesn't matter all that much for D TBH. Without the basic
>>> infrastructure for scientific computing like you get out of
>>> the box with those three languages, the ability to target
>>> another platform isn't going to matter. There are lots of
>>> pieces here and there in our community, but it's going to
>>> take some effort to (a) make it easy to use the different
>>> parts together, (b) document everything, and (c) write the
>>> missing pieces.
>>
>> I disagree. D/dcompute can be used as a better general
>> purpose GPU kernel language now (superior metaprogramming,
>> sane nested functions, ...). If you are concerned about
>> "infrastructure", you embed in C++.
>
> I was referring to libraries like numpy for Python or the
> numerical capabilities built into Julia. D just isn't in a
> state where a researcher is going to say "let's write a D
> program for that simulation". You can call some things in Mir
> and cobble together an interface to some C libraries or
> whatever. That's not the same as Julia, where you write the
> code you need for the task at hand. That's the starting point
> to make it into scientific computing.
I agree. If the heavy lifting for a new project is accomplished
by libraries that you can't easily co-opt, then it's better to
employ D as the GPU language or not at all.
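For context, a dcompute kernel is just attributed D that LDC
compiles to SPIR-V and/or PTX. A sketch along the lines of the
saxpy example from the DCompute announcement (module paths and
names are from memory of that example, so treat this as
illustrative rather than a verified build):

@compute(CompileFor.deviceOnly) module kernels;
import ldc.dcompute;        // @kernel, GlobalPointer, CompileFor
import dcompute.std.index;  // GlobalIndex

// One kernel source, targeting both OpenCL (SPIR-V) and CUDA (PTX).
@kernel void saxpy(GlobalPointer!float res,
                   float alpha,
                   GlobalPointer!float x,
                   GlobalPointer!float y,
                   size_t N)
{
    auto i = GlobalIndex.x;   // global work-item / thread index
    if (i >= N) return;
    res[i] = alpha * x[i] + y[i];
}

The host side enumerates devices and enqueues the kernel through
dcompute's driver wrappers; the point here is that the kernel
itself is ordinary D, so the usual metaprogramming and nested
functions are available inside it.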
More broadly, I don't think we should set ourselves the task of
displacing language X in community Y. Better to focus on making
accelerator programming "no big deal" in general, so that people
opt in more often (first as an accelerator-language sub-component,
then maybe more).
While my present-day use of dcompute is in real-time video, where
it works a treat, I'm most excited about the possibilities
dcompute would afford on SoCs. World-class perf/watt from
dead-simple code deployable to billions of units? Yes, please.
>
> ...