GPGPUs

John Colvin john.loughran.colvin at gmail.com
Tue Aug 13 09:41:41 PDT 2013


On Tuesday, 13 August 2013 at 16:27:46 UTC, Russel Winder wrote:
> The era of GPGPUs for Bitcoin mining is now over; the miners have
> moved to ASICs. The new market for GPGPUs is likely the banks and
> other "Big Data" folk. True, many of the banks are already making
> some use of GPGPUs, but it is not yet big. But it is coming.
>
> Most of the banks are either reinforcing their JVM commitment via
> Scala, or re-architecting around C++ and Python. True, there is some
> C#/F#, but it is all for terminals rather than for strategic
> computing, and it is diminishing (despite what you might hear from
> .NET-oriented training companies).
>
> Currently, GPGPU tooling means C. OpenCL and CUDA (if you have to)
> are C APIs for C coding. There are some C++ bindings. There are
> interesting moves afoot on the JVM to enable access to GPGPUs from
> Java, Scala, Groovy, etc., but this is years away, which is a longer
> timescale than the opportunity.
>
> Python's offerings, PyOpenCL and PyCUDA, are basically ways of
> managing C-coded kernels, which rather misses the point. I may get
> involved in trying to write an expression language in Python to go
> with PyOpenCL so that kernels can be written in Python – a more
> ambitious version aimed at Groovy is also mooted.
>
> However, D has the opportunity to gain a bridgehead if the
> combination of D, PyD, QtD and C++ comes to be seen as a viable,
> solid platform for development. The analogue here is the way Java is
> giving way to Scala and Groovy, but in an evolutionary way, since
> everything interworks. The opportunity is for D to be seen as the
> native-code world's analogue of Scala on the JVM: a language that
> interworks well with all the other players on the platform but
> provides more.
>
> The entry point would be D gaining a way of creating GPGPU kernels
> that is better than the current C/C++-plus-tooling approach.
>
> This email is not a direct proposal to do work, really just an
> enquiry to see whether there is any interest in this area.
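
To make the PyOpenCL point above concrete, here is a minimal sketch
(assuming PyOpenCL and NumPy are installed; the kernel name and array
sizes are just illustrative). The host code is Python, but the kernel
is still a string of OpenCL C, and even pyopencl.array's operator
overloading only hides that for simple elementwise expressions:

import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

# The kernel is plain OpenCL C, embedded as a Python string.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

# Explicit buffer management on the Python side.
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)

# pyopencl.array gestures at the "expression language" idea: the
# kernel for a_dev + b_dev is generated behind the scenes.  Anything
# beyond elementwise expressions drops back to C strings, though.
a_dev = cl_array.to_device(queue, a)
b_dev = cl_array.to_device(queue, b)
c_dev = a_dev + b_dev
assert np.allclose(c_dev.get(), result)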

I'm interested. There may be a significant need for GPU work in my
PhD, seeing as the amount of data needing to be crunched is a bit
daunting (dozens of sensors sampling at MHz rates, with very intensive
image analysis / computer vision work).
I could farm the whole thing out to CPU nodes, but using the GPU nodes
would be more fun.


However, I'm insanely busy at the moment and have next to no
experience with GPU programming, so I'm probably not going to be that
useful for a while!

