std.concurrency wrapper over MPI?

dsimcha dsimcha at yahoo.com
Sat Aug 6 07:09:49 PDT 2011


On 8/6/2011 2:57 AM, Russel Winder wrote:
>
> The main problem here is going to be that when anything gets released
> performance will be the only yardstick by which things are measured.
> Simplicity of code, ease of evolution of code, all the things
> professional developers value, will go out of the window.  It's HPC
> after all :-)

This is why, even though I do stuff that's arguably HPC, I can't stand 
the HPC community.  Of course performance is important, but nothing 
should be so sacred as to be completely immune to tradeoffs.  The thing 
that drew me to D is that you can get pretty good performance out of it 
without sacrificing that much ease of use compared to dynamic languages. 
Besides, you can always provide a high-level but not-that-efficient 
API for most cases and a lower-level API for when more control is needed.
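
To make that concrete, here's a rough sketch of what I mean (the 
function names are made up, not anything in Phobos): a convenience 
wrapper with sensible defaults layered over a lower-level entry point 
that exposes the knobs.

    import std.parallelism;

    // Low-level API:  caller picks the pool and the work unit size.
    void processLowLevel(double[] data, TaskPool pool, size_t workUnitSize)
    {
        foreach(ref x; pool.parallel(data, workUnitSize))
        {
            x = x * x;  // stand-in for real per-element work
        }
    }

    // High-level API:  reasonable defaults, less control, less typing.
    void process(double[] data)
    {
        processLowLevel(data, taskPool, 100);
    }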

Anyhow, D has one key advantage that makes it more tolerant of 
communication overhead than most languages:  std.parallelism.  At least 
the way things are set up on the cluster here at Johns Hopkins, each 
node has 8 cores.  The "traditional" MPI way of doing things is 
apparently to allocate 8 MPI processes per node in this case, one per 
core.  Instead, I'm allocating one process per node, using MPI only for 
very coarse-grained parallelism and using std.parallelism for more 
fine-grained parallelism to keep all 8 cores occupied within one MPI 
process.
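
Roughly, it looks like this (a sketch only: the rank and node count 
would really come from MPI_Comm_rank/MPI_Comm_size through extern(C) 
bindings; I've hard-coded them here since no standard D MPI binding 
exists):

    import std.parallelism, std.range;

    void main()
    {
        // In the real program these come from MPI_Comm_rank and
        // MPI_Comm_size; hard-coded here for illustration.
        immutable int rank = 0;
        immutable int nNodes = 4;

        enum nJobs = 10_000;

        // Coarse-grained:  this node takes every nNodes-th job.
        auto myJobs = iota(rank, nJobs, nNodes);

        // Fine-grained:  std.parallelism spreads this node's share
        // of the jobs across all of its cores (totalCPUs of them).
        foreach(job; parallel(myJobs))
        {
            // ... compute one job; results gathered later over MPI ...
        }
    }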

