SIMD support...

Paulo Pinto pjmlp at progtools.org
Fri Jan 6 08:00:19 PST 2012


Please don't start a flame war on this, I am just expressing an opinion.

I think that for heterogeneous computing we are better off with a language
that supports functional programming concepts.

From what I have seen in papers, many imperative languages have the issue
that they are too tied to the old homogeneous computing model we had on the
desktop. That is the main reason why C and C++ are starting to look like
Frankenstein languages, with all the extensions companies are adding to them
to support the new models.

Functional languages have the advantage that their hardware model is more
abstract and as such can be mapped more easily to heterogeneous hardware.
This is also an area where VM-based languages might have some kind of
advantage, but I am not sure.

Now, D actually has quite a few tools for exploring functional concepts, so
I guess it could take off in this area if enough HPC people took an interest
in it.
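To illustrate what I mean, here is a minimal sketch of the functional style
D already supports through Phobos ranges (the pipeline itself is just an
example I made up, nothing GPU-specific):

```d
import std.algorithm : filter, map;
import std.range : iota;
import std.stdio : writeln;

// A pure function: no hidden state, which is the property that
// in principle lets a compiler or runtime move work onto
// parallel or heterogeneous hardware.
pure int square(int x) { return x * x; }

void main()
{
    // Lazy, composable, side-effect-free pipeline.
    auto result = iota(1, 6)           // 1, 2, 3, 4, 5
        .filter!(x => x % 2 == 1)      // 1, 3, 5
        .map!square;                   // 1, 9, 25
    writeln(result);                   // prints [1, 9, 25]
}
```

Because the pipeline is lazy and free of side effects, nothing in it fixes a
particular execution order; on the CPU side `std.parallelism` already
exploits exactly this with `taskPool.map`.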

Regarding CUDA, you will surely know this better than I do. I read somewhere
that in most research institutes people only care about CUDA, not OpenCL,
because it is older than OpenCL, has C++ support and better tooling, and
NVIDIA cards outperform ATI in this area. But I don't have any experience
here, so I don't know how much of this is true.

--
Paulo


"Russel Winder"  wrote in message 
news:mailman.109.1325864213.16222.digitalmars-d at puremagic.com...
On Fri, 2012-01-06 at 16:09 +0100, Paulo Pinto wrote:
> From what I see in HPC conferences papers and webcasts, I think it might 
> be
> already too late for D
> in those scenarios.

Indeed, for core HPC that is true:  if you aren't using Fortran, C, C++,
and Python you are not in the game.  The point is that HPC is really
about using computers that cost a significant proportion of the USA
national debt.  My thinking is that with Intel especially, looking to
use the Moore's Law transistor count mountain to put heterogeneous many
core systems on chip, i.e. arrays of CPUs connected to GPGPUs on chip,
the programming languages used by the majority of programmers not just
those playing with multi-billion dollar kit, will have to be able to
deal with heterogeneous models of computation.   The current model of
separate compilation and loading of CPU code and GPGPU kernel is a hack
to get things working in a world where tool chains are still about
building 1970s single threaded code.  This represents an opportunity for
non C and C++ languages.  Python is beginning to take a stab at trying
to deal with all this.  D would be another good candidate.  Java cannot
be in this game without some serious updating of the JVM semantics -- an
issue we debated a bit on this list a short time ago, so no need to
rehearse all the points.

It just strikes me as an opportunity to get D front and centre by having
it provide a better development experience for these heterogeneous
systems that are coming.

Sadly Santa failed to bring me a GPGPU card for Christmas so as to do
experiments using C++, Python, OpenCL (and probably CUDA, though OpenCL
is the industry standard now).  I will though be buying one for myself
in the next couple of weeks.

> "Russel Winder"  wrote in message
> news:mailman.107.1325862128.16222.digitalmars-d at puremagic.com...
> On Fri, 2012-01-06 at 16:35 +0200, Manu wrote:
> [...]
>
> Currently GPGPU is dominated by C and C++ using CUDA (for NVIDIA
> addicts) or OpenCL (for Apple addicts and others).  It would be good if
> D could just take over this market by being able to manage GPU kernels
> easily.  The risk is that PyCUDA and PyOpenCL beat D to market
> leadership.
>

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: 
sip:russel.winder at ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel at russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder 
