std.concurrency: Returning from spawned function

Russel Winder russel at russel.org.uk
Sat Sep 11 03:04:55 PDT 2010


On Sat, 2010-09-11 at 00:52 -0400, Sean Kelly wrote:
> dsimcha Wrote:
> 
> > I was thinking about ways to improve std.concurrency w/o compromising its
> > safety or the simplicity of what already works.  Isn't it unnecessarily
> > restrictive that a spawned function must return void?  Since the spawned
> > thread dies when the spawned function returns, the return value could safely
> > be moved to the owner thread.  Therefore, the return values wouldn't even have
> > to be immutable/shared/lacking indirection.  The return value could, for
> > example, be stored in Tid, with attempts to retrieve it blocking until the
> > spawned thread returns.
> 
> That each spawn() results in the creation of a thread whose lifetime
> ends when the function returns is an implementation detail.  It could
> just as easily be a thread pool that resets its TLS data when picking up
> a new operation, a user-space thread, etc.  In short, I don't think that
> the behavior of a thread exiting should be a motivating factor for
> design changes.  Does this gain anything over sending a message on
> exit?
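
For concreteness, the "message on exit" idiom being referred to looks
roughly like this with std.concurrency as it stands (a minimal sketch;
the squaring is a stand-in for real work):

import std.concurrency;

// The spawned function does its work and sends the result back to its
// owner just before returning -- the "message on exit" idiom.
void compute(Tid owner, double x)
{
    send(owner, x * x);
}

void main()
{
    spawn(&compute, thisTid, 3.0);

    // Block until the result arrives, much as a blocking fetch of a
    // return value stored in the Tid would.
    double result = receiveOnly!double();
}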

I guess it is really a question of message passing versus data
parallelism.  Clearly, in a message passing idiom, asynchronous function
execution can (and probably should) always be handled by void functions.
In a data parallel context you generally want a function that returns a
value.  The idiom here is to take a sequence and create a new sequence
by applying a function to each element of the old one -- parallel
arrays.  Algorithmically the computation on each result element is
independent, even where non-local read access is allowed, so this is
"embarrassingly parallel".  How the computations map to threads, and
thence to processors, is left as a runtime implementation issue.  C++0x
doesn't really get this right.  Chapel and X10 are getting close, but
then they are full PGAS (partitioned global address space) languages, so
they should do.  Haskell, via DPH, is also getting there, as indeed is
Java -- assuming Java 7 ever makes it into production.
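
As a minimal sketch of the parallel-array idiom in D, using the API of
dsimcha's parallelism library (which later became Phobos's
std.parallelism); the function f is just a stand-in, and the task pool,
not the programmer, decides how the per-element work maps onto threads:

import std.parallelism;

// Stand-in for any independent per-element computation.
double f(double x) { return x * x; }

void main()
{
    auto xs = [1.0, 2.0, 3.0, 4.0];

    // Build a new array by applying f to every element of the old one;
    // how the computations map onto threads and processors is left to
    // the task pool.
    double[] ys = taskPool.amap!f(xs);
}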

I think my real point is that data parallelism shouldn't have to be
manually constructed from asynchronous functions, as long as you have
closures -- either explicitly or implicitly (as they can be constructed
in C++ and Java).
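
To show what "manually constructed" means, here is a sketch of a
hand-rolled parallel map over std.concurrency: one spawned function per
element, each sending its (index, result) pair back as a message.  A
real version would chunk the work rather than spawn per element, but
the bookkeeping is the point:

import std.concurrency;

// Stand-in for the per-element computation.
double f(double x) { return x * x; }

void worker(Tid owner, size_t index, double x)
{
    // Deliver the result to the owner as a message on exit.
    send(owner, index, f(x));
}

void main()
{
    auto input = [1.0, 2.0, 3.0, 4.0];
    auto output = new double[input.length];

    // One spawned function per element of the input sequence.
    foreach (i, x; input)
        spawn(&worker, thisTid, i, x);

    // Collect the (index, result) messages in whatever order they arrive.
    foreach (_; 0 .. input.length)
    {
        auto msg = receiveOnly!(size_t, double)();
        output[msg[0]] = msg[1];
    }
}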

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder at ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel at russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder