using DCompute

Nicholas Wilson via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Thu Jul 27 17:23:35 PDT 2017


On Thursday, 27 July 2017 at 21:33:29 UTC, James Dean wrote:
> I'm interested in trying it out; it says it's just for ldc. Can 
> we simply compile it using ldc, then import it and use dmd, ldc, 
> or gdc afterwards?

The ability to write kernels is limited to LDC, though there is 
no practical reason why, once compiled, you couldn't use the 
resulting generated files with GDC or DMD (as long as the 
mangling matches, which it should). Getting that working is not a 
priority, since the assumption is that if you're trying to use 
the GPU to boost your computing power, you likely care enough 
about good optimisations to be using LDC rather than DMD in the 
first place (GDC is still a bit behind DMD, so I don't consider 
it).

> ---
> a SPIRV capable LLVM (available here) to build ldc to support 
> SPIRV (required for OpenCL).
> or LDC built with any LLVM 3.9.1 or greater that has the NVPTX 
> backend enabled, to support CUDA.
> ---
>
> Does the LDC from the download pages have these enabled?

I don't think so, although future releases will likely have the 
NVPTX backend enabled.
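If you do need to build it yourself, the gist for CUDA is to 
enable the NVPTX target when configuring LLVM and then build LDC 
against that LLVM. A rough sketch (the flag names come from the 
LLVM/LDC CMake docs, the paths are placeholders; double-check 
against the versions you actually use):

  # Configure LLVM with the NVPTX backend enabled (for CUDA).
  cmake /path/to/llvm-src -DLLVM_TARGETS_TO_BUILD="X86;NVPTX"

  # Build LDC against that LLVM installation.
  cmake /path/to/ldc-src -DLLVM_ROOT_DIR=/path/to/installed/llvm

For OpenCL/SPIR-V you instead need the SPIR-V capable LLVM fork 
linked from the DCompute README.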

> Also, can DCompute or any GPU stuff efficiently render stuff 
> because it is already on the GPU or does one sort of have to 
> jump through hoops to, say, render a buffer?

There are memory sharing extensions that let you share buffers 
with OpenGL/DirectX, so you shouldn't suffer a performance 
penalty for doing so.

> e.g., suppose I want to compute a 3D mathematical function and 
> visualize its volume. Do I go into the GPU, do the compute, 
> back out to the CPU, then to the graphics system 
> (OpenGL/DirectX), or can I just essentially do it all from the 
> GPU?

There should be no I/O overhead; you can essentially do it all on 
the GPU.
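Concretely, on the OpenCL side this is the cl_khr_gl_sharing 
extension (CUDA has an equivalent graphics-interop API). A rough 
sketch of the host side from D, using the raw OpenCL C API rather 
than anything DCompute-specific; the binding/module name is an 
assumption, and the OpenCL context must have been created with 
GL-sharing properties:

  import derelict.opencl.cl;  // assumed OpenCL binding exposing the C API

  void fillSharedBuffer(cl_context ctx, cl_command_queue queue, uint glBuf)
  {
      cl_int err;
      // Wrap an existing OpenGL buffer object as an OpenCL memory object.
      cl_mem buf = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, glBuf, &err);

      // Acquire while the kernel writes, release before OpenGL draws from it.
      clEnqueueAcquireGLObjects(queue, 1, &buf, 0, null, null);
      // ... enqueue the compute kernel that fills `buf` with the volume ...
      clEnqueueReleaseGLObjects(queue, 1, &buf, 0, null, null);
      clReleaseMemObject(buf);
  }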

