Good demo for showing benefits of parallelism

Sean Kelly sean at f4.ca
Sun Jan 28 09:54:40 PST 2007


Kevin Bealer wrote:
> 
> Then the question comes: why (and if) message passing / futures are 
> better than Thread and Mutex.  Herb Sutter argues that it is hard to 
> design correct code using locks and primitives like sleep/pause/mutex, 
> and that it gets a lot harder with larger systems.

I don't think anyone is disagreeing with you here.  CSP is built around 
message passing and was invented in the late 70s.  And IIRC the actor 
model dates back to the early 70s.
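The core CSP idea (processes that share no state and communicate only over 
channels) can be sketched in any language with threads and queues.  A 
minimal, illustrative Python version (real CSP channels are synchronous 
and support choice; this is just the shape of the model):

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A CSP-style process: no shared mutable state, only channel I/O."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down
            break
        outbox.put(msg * msg)    # reply with the squared value

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in (2, 3, 4):
    inbox.put(n)
results = [outbox.get() for _ in range(3)]
inbox.put(None)
t.join()
# results == [4, 9, 16]
```

Because the only interaction is through the channels, there are no locks 
to compose and no lock-ordering rules to get wrong.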

> (As I understand it...) Herb's argument is that if I have several 
> modules that use Locks correctly, they often won't when combined. 
> Deadlock (and livelock) avoidance require knowledge of the locking rules 
> for the entire system.  Without such knowledge, it is difficult to do 
> things like lock ordering that prevent deadlocks.  Other techniques are 
> available (deadlock detection and rollback), but these can have their 
> own thorny failure states.

Yup.  His basic argument is that object-oriented programming is 
incompatible with lock-based programming because object composition can 
result in unpredictable lock interaction.  In essence, if you call into 
unknown code while a lock is held, there is no way to prove your code 
will not deadlock.
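The lock-ordering technique Kevin mentions can be made concrete.  A 
hedged Python sketch (the names and the id-based ordering are 
illustrative, not a prescription): two threads want the same pair of 
locks in opposite orders, which would deadlock if each acquired them 
naively; sorting the locks into a single global order breaks the cycle.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

# Hazard: if one thread takes (lock_a, lock_b) while another takes
# (lock_b, lock_a), each can end up holding one lock and waiting
# forever on the other -- the classic deadlock cycle.

def ordered(*locks):
    """Acquire locks in one global order (here: by object id) so no
    two threads can ever wait on each other in a cycle."""
    return sorted(locks, key=id)

counter = 0

def transfer(src, dst):
    global counter
    first, second = ordered(src, dst)
    with first, second:
        counter += 1

# Both threads request the locks in opposite orders, but ordered()
# normalizes the acquisition order, so this always completes.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
# counter == 2, with no deadlock regardless of interleaving
```

The catch, as noted above, is that this discipline only works if every 
module in the system knows about and follows the same global order.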

> In a design based on futures, I can reason about correctness much more 
> easily because the design can be made sequential trivially -- just don't 
> compute the result until the point where the value is accessed.

I like futures, but they are structured in such a way that they still 
lend themselves to data sharing.  They're definitely better than 
traditional lock-based programming and they're a good, efficient middle 
ground for parallel/concurrent programming, but there's something to be 
said for more structured models like CSP as well.
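Kevin's "trivially sequential" property is easy to demonstrate: a 
future-based program can be degraded to a sequential one by computing 
each value eagerly at its access point, and both must produce identical 
results.  A sketch using Python's concurrent.futures (the function and 
inputs are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(n: int) -> int:
    """Stand-in for a CPU-bound computation."""
    return sum(i * i for i in range(n))

# Parallel version: submit work now, force each value only at the
# point where it is accessed, via .result().
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(expensive, n) for n in (10, 20, 30)]
    parallel = [f.result() for f in futures]

# Sequential version: same shape, but each value is computed at the
# access point.  Correctness can be reasoned about sequentially.
sequential = [expensive(n) for n in (10, 20, 30)]

assert parallel == sequential
```

The data-sharing caveat still applies: nothing stops `expensive` from 
touching shared state, which is where the more structured models like 
CSP have an edge.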

> I agree completely with your premise that concurrency is fundamentally 
> hard.  So the goal (as I see it today) is to take as much of the 
> concurrency as possible *out* of the algorithm, and still leverage 
> multiple CPUs and solve the I/O vs. CPU problem I label as #2 above.

One thing I like about Concur is that it forces the user to think in 
terms of which tasks may be run in parallel without much affecting the 
structure of the application--it's a good introduction to parallel 
programming, and it can be implemented fairly cleanly entirely in 
library code.  But I think it will be a slow transition, because it's not a 
natural way for people to think about things.  A while back I read that 
most congestion on the highways exists because people all accelerate and 
decelerate at different rates, so accretion points naturally form just 
from this interaction of 'particles'.  But when people encounter a 
traffic slowdown, their first thought is that there is a specific, 
localized cause: an accident occurred, etc.  In essence, people tend to 
be reasonably good at delegation, but they are worse at understanding 
the interaction between atomic tasks.  Eventually, both will be 
important, and the more machines can figure out the details for us the 
better off we'll be :-)


Sean



More information about the Digitalmars-d mailing list