Has anyone used D with Nvidia's Cuda?

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Sat Apr 4 10:01:35 PDT 2015


On 4/4/2015 4:16 AM, ponce wrote:
> Also consider costs: NVIDIA will artificially limit the speed of pinned memory
> transfers to push you to buy expensive $3000 discrete GPUs. They have segmented
> the market to extract the most from performance-starved people. It goes to the
> point that you are left with $3000 GPUs that are slower than $300 ones, just to
> get the right driver. Hopefully the market will correct them after so much milking.

The only thing I can add to that is that the people who really want performance 
will be more than willing to buy the GPU, and the $3000 means nothing to them, 
i.e. people to whom microseconds mean money, such as those writing trading software.

I don't want to leave any tern unstoned.

Also, it seems that we are 95% there in supporting CUDA already, thanks to your 
header work. We just need to write some examples to make sure it works, and a 
few pages of "how to do it". Once that is done, we can approach Nvidia and get 
them to mention on their site that D supports CUDA. Nvidia is really pushing 
CUDA, and it will be of mutual benefit for them to promote D and us to support CUDA.
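For what it's worth, the kind of example meant here is small: because CUDA's driver API is plain C, D can call it directly through extern(C) declarations. Below is a minimal sketch, not the actual bindings from the header work mentioned above; the function signatures (cuInit, cuDeviceGetCount, cuDeviceGet, cuDeviceGetName) are the documented CUDA driver API, but the alias names are my own, and it assumes you link against libcuda (e.g. dmd app.d -L-lcuda) on a machine with an Nvidia driver installed.

```d
// Sketch: enumerating CUDA devices from D via the C driver API.
// Assumes libcuda is available at link time and a CUDA-capable
// driver is installed; hypothetical alias names, real entry points.
import std.stdio;
import std.string : fromStringz;

extern (C) nothrow @nogc
{
    alias CUresult = int;   // CUDA_SUCCESS == 0
    alias CUdevice = int;

    CUresult cuInit(uint flags);
    CUresult cuDeviceGetCount(int* count);
    CUresult cuDeviceGet(CUdevice* device, int ordinal);
    CUresult cuDeviceGetName(char* name, int len, CUdevice dev);
}

void main()
{
    if (cuInit(0) != 0)
    {
        writeln("cuInit failed -- is a CUDA driver installed?");
        return;
    }

    int count;
    cuDeviceGetCount(&count);
    writefln("Found %d CUDA device(s)", count);

    foreach (i; 0 .. count)
    {
        CUdevice dev;
        cuDeviceGet(&dev, i);
        char[256] name;
        cuDeviceGetName(name.ptr, cast(int) name.length, dev);
        writefln("  device %d: %s", i, name.ptr.fromStringz);
    }
}
```

A "how to do it" page would mostly consist of variations on this: translate the C prototypes once, link the driver library, and the rest is ordinary D.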

