The problem that took him 5 years to fix in C++, I solved in a minute with D

Bruce Carneal bcarneal at gmail.com
Mon Mar 14 03:51:27 UTC 2022


On Monday, 14 March 2022 at 00:42:34 UTC, Paulo Pinto wrote:
> On Friday, 11 March 2022 at 09:37:41 UTC, Bruce Carneal wrote:
>> On Friday, 11 March 2022 at 06:41:37 UTC, Paulo Pinto wrote:
...
>> I've not tried C# on GPUs, Hybridizer or otherwise.  Was 
>> the GPU metaprogramming capability there not too "clunky"?  
>> How did your performance compare to well-crafted C++/CUDA 
>> implementations?
>
> I guess it is good enough to keep their business going, 
> http://www.altimesh.com/

Indeed.  Business success with a language pretty much settles 
the "is the language good enough" question (for successful dlang 
companies as well, of course).

Still, I thought we were addressing comparative benefit rather 
than "good enough".

Concretely, I prefer dcompute to my former GPU development 
language, C++/CUDA: quicker development, more readable code, 
easier to make non-trivial kernels "live in registers", a better 
launch API, ...
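
To make the "more readable" point concrete, here is a minimal 
sketch of a dcompute SAXPY kernel, following the examples that 
were published with dcompute.  The module paths (ldc.dcompute, 
dcompute.std.index) and the GlobalIndex helper are from memory, 
so treat the details as approximate rather than authoritative:

    @compute(CompileFor.deviceOnly) module kernels;

    import ldc.dcompute;       // @kernel, GlobalPointer, CompileFor
    import dcompute.std.index; // GlobalIndex

    // SAXPY: res[i] = alpha * x[i] + y[i], written as ordinary D.
    @kernel void saxpy(GlobalPointer!float res,
                       float alpha,
                       GlobalPointer!float x,
                       GlobalPointer!float y,
                       size_t N)
    {
        auto i = GlobalIndex.x;   // flat global thread index
        if (i >= N) return;       // guard the tail of the range
        res[i] = alpha * x[i] + y[i];
    }

On the host side the launch is a single templated enqueue on a 
queue, something like q.enqueue!saxpy([N, 1, 1], [1, 1, 1])(res, 
alpha, x, y, N), rather than the <<<...>>> syntax plus separate 
argument marshalling, which is most of what I mean by "better 
launch API".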

Since I've not used C# at all, I'd like to hear from those who 
have.  If you know of any C# GPU programmers with reasoned 
opinions about dlang/dcompute, good or bad, please ask them to 
comment.


