The future of concurrent programming

Pragma ericanderton at yahoo.removeme.com
Tue May 29 07:33:20 PDT 2007


Henrik wrote:
> Today's rant on Slashdot is about parallel programming and why support for multiple cores in programs is only rarely seen. There are a lot of different opinions on why we haven't seen a veritable rush to adopt parallelized programming strategies, some of which include:
> 
> * Multiple cores haven't been available/affordable all that long, programmers just need some time to catch up.
> * Parallel programming is hard to do (as we lack the proper programming tools for it). We need new concepts, new tools, or simply a new generation of programming languages created to handle parallelization from start.
> * Parallel programming is hard to do (as we tend to think in straight lines, lacking the proper cognitive faculties to parallelize problem solving). We must accept that this is an inherently difficult thing for us, and that there never will be an easy solution.
> * We have both the programming tools needed and the cognitive capacity to deal with them, only the stupidity of the current crop of programmers or their inability to adapt stand in the way. Wait a generation and the situation will have sorted itself out.
> 
> I know concurrent programming has been a frequent topic in the D community forums, so I would be interested to hear the community’s opinions on this. What will the future of parallel programming look like? 
> Are new concepts and tools that support parallel programming needed, or just a new way of thinking? Will the “old school” programming languages fade away, as some seem to suggest, to be replaced by HOFLs (Highly Optimized Functional Languages)? Where will/should D be in all this? Is it a doomed language if it doesn't incorporate an efficient way of dealing with this (natively)?
> 
> 
> Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml
> 
> 
> /// Henrik
> 

The way I've often thought of it is that we're lacking the higher-level constructs needed to take advantage of what 
modern processors have to offer.  My apologies for not offering an exact solution, but rather my feelings on the matter. 
Who knows, maybe someone already has a syntax for what I'm attempting to describe?

I liken the problem to the way that OOP redefined how we build large scale systems.  The change was so profound that it 
would be difficult and cumbersome to use a purely free-function design past a certain degree of complexity.  Likewise, 
with parallelism, we're still kind of at the free-function level with semaphores, mutexes and threads.  Concepts like 
"transactional memory" are on the right path, but there's more to it than that.
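To make the "transactional memory" reference concrete: mainstream languages don't ship STM, but the optimistic read-compute-commit-retry loop it generalizes is available today via atomic compare-and-set. Here's a minimal sketch in Java (Java standing in for D here; `CasCounter` and `withdraw` are illustrative names, not anyone's real API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasCounter {
    private final AtomicLong balance = new AtomicLong(100);

    // Optimistic update: read, compute, then commit only if no other
    // thread changed the value in between; otherwise retry. No lock is
    // held while computing -- the kernel of the "transactional" style.
    public long withdraw(long amount) {
        long current, next;
        do {
            current = balance.get();
            next = current - amount;
        } while (!balance.compareAndSet(current, next));
        return next;
    }

    public static void main(String[] args) {
        CasCounter account = new CasCounter();
        System.out.println(account.withdraw(30)); // 70
    }
}
```

The point of the sketch is the shape of the contract: the programmer states an atomic intent, and the retry/synchronization mechanics stay out of the application logic.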

What is needed is something "higher level" that is easily grokked by the programmer, yet just as optimizable by the 
compiler.  Something like an "MT package definition" that lets us bind code and data to a particular heap, processor, 
thread priority or whatever, so that parallelism happens in a controlled yet abstract way.  Much as the GC freed us from 
matching every allocation with a call to delete()/free(), such a scheme should free our hands and minds in the same way.

The overall idea I have is to whisper to the compiler about the kinds of things we'd like to see, instead of working 
with so much minutiae all the time.  Let the compiler worry about how to cross heap boundaries and insert 
semaphores/mutexes/queues/whatever when contexts mix; that's make-work, error-prone stuff, which is exactly what 
compilers are for.  Now while you could do this with compiler options, I think we need to be far more expressive than 
"-optimize-the-hell-out-of-it-for-MT"; it needs to be in the language itself.

That way you can say things like "these modules are on a transactional heap, for at most 2 processors" and "these 
modules must have their own heap, and can use n processors", all within the same program.  At the same time, you could 
also say "parallelize this foreach statement", "single-thread this array operation", or "move this instance into package 
Foo's heap (whatever that is)".  The idea is to say what we really want done, and trust the compiler (and runtime 
library complete with multi-heap support and process/thread scheduling) to do it for us.
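A taste of the "parallelize this foreach statement" idea already exists in Java's parallel streams, where a single declarative flag hands scheduling to the runtime's fork/join pool. This is only an analogue of the D-level syntax being wished for here, not a proposal for it:

```java
import java.util.stream.IntStream;

public class ParallelSum {
    public static void main(String[] args) {
        // Declarative: .parallel() states *what* we want done in parallel;
        // the runtime decides how to split the work across cores.
        long sum = IntStream.rangeClosed(1, 1_000_000)
                            .parallel()
                            .mapToLong(i -> (long) i)
                            .sum();
        System.out.println(sum); // 500000500000
    }
}
```

Removing `.parallel()` yields the "single-thread this operation" case with no other change to the code, which is roughly the sledgehammer-like expressiveness described above.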

Sure, you'd lose a lot of fine-grained control with such an approach, but as each new generation of processors ships 
with exponentially more cores than the last, we're going to yearn for something more sledgehammer-like.

-- 
- EricAnderton at yahoo



More information about the Digitalmars-d mailing list