Has anyone used D with Nvidia's CUDA?

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Sat Apr 4 03:26:25 PDT 2015


On 4/4/2015 3:04 AM, weaselcat wrote:
> PR?

Exactly!

The idea is that GPUs can greatly accelerate code (2x to 1000x), and if D wants 
to appeal to high-performance computing programmers, we need to have a workable 
way to program the GPU.

At this point, it doesn't have to be slick or great, but it has to be doable.

Nvidia appears to have put a lot of effort into CUDA, and it shouldn't be hard 
to work with it given the Derelict D headers. That would give us an answer for 
D users who want to leverage the GPU.
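
To make that concrete, something like this ought to be enough to talk to the 
CUDA driver API from D. The derelict.cuda module and DerelictCUDADriver loader 
names are from memory, so double-check them against the actual DerelictCUDA 
bindings; the cu* calls just mirror the C driver API:

// Sketch only: enumerate CUDA devices from D via Derelict-style
// dynamic bindings. Binding/loader names are assumptions.
import std.stdio;
import std.string : fromStringz;
import derelict.cuda;

void main()
{
    DerelictCUDADriver.load();  // load libcuda.so / nvcuda.dll at runtime
    cuInit(0);                  // initialize the CUDA driver API

    int count;
    cuDeviceGetCount(&count);
    writefln("CUDA devices found: %d", count);

    foreach (i; 0 .. count)
    {
        CUdevice dev;
        cuDeviceGet(&dev, i);

        char[256] name;
        cuDeviceGetName(name.ptr, cast(int)name.length, dev);
        writefln("Device %d: %s", i, name.ptr.fromStringz);
    }
}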

It would also be dazz if someone were to look at std.algorithm and see what 
could be accelerated with GPU code.
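
For a sense of what that might look like, here's an ordinary map+reduce 
pipeline. As written it runs on the CPU and is purely illustrative, but each 
element is transformed independently and the results are combined with an 
associative operation -- exactly the data-parallel shape a GPU backend could 
offload:

import std.algorithm : map, reduce;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // map: independent per-element work; reduce: associative
    // combination. Both are natural fits for GPU execution.
    auto sumOfSquares = iota(1, 1_000_001)
        .map!(x => cast(long)x * x)
        .reduce!((a, b) => a + b);
    writeln(sumOfSquares);  // prints 333333833333500000
}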

