<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Andrei, <u><i><b>PLEASE</b></i></u> stop hitting reply all. Also,
please ignore the private message I sent you and read this one
instead, as I've included a reply to one more point in this one.<br>
<br>
On 8/27/2010 1:20 AM, Andrei Alexandrescu wrote:
<blockquote cite="mid:4C774B36.6090808@erdani.com" type="cite">
<blockquote type="cite">2. From reading the description of
std.concurrency in TDPL it seemed more geared toward concurrency
(i.e. making stuff appear to be happening simultaneously, useful for
things like GUIs and servers) rather than parallelism (i.e. the use
of multiple CPU cores to increase throughput, useful for things like
scientific computing and video encoding). It seems fairly difficult
(though I haven't tried yet) to write code that's designed for
pull-out-all-stops maximal performance on a multicore machine,
especially since immutability is somewhat of a straitjacket. I find
implicit sharing and the use of small synchronized blocks or atomic
ops to be very useful in writing parallel programs.<br>
</blockquote>
<br>
You are correct on all counts. D's current concurrency mechanisms
are not geared towards parallel SIMD-style programming.<br>
</blockquote>
<br>
I assume you meant SMP parallelism, not SIMD? ParallelFuture is
about SMP parallelism (spreading work across multiple cores/threads);
SIMD means things like SSE instructions, which it doesn't touch.<br>
<blockquote cite="mid:4C774B36.6090808@erdani.com" type="cite">
<br>
<blockquote type="cite">4. I've been eating my own dogfood for a
while on my ParallelFuture library.
(<a class="moz-txt-link-freetext" href="http://cis.jhu.edu/~dsimcha/parallelFuture.html">http://cis.jhu.edu/~dsimcha/parallelFuture.html</a>;
<a class="moz-txt-link-freetext" href="http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d">http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d</a>)
It's geared toward throughput-oriented parallelism on multicore
machines, not concurrency for GUIs, servers, etc., and is higher
level than std.concurrency. Is there any interest in including
something like this in Phobos? If so, would we try to make it fit
into the explicit-sharing-only model, or treat it as an alternative
method of multithreading geared towards pull-out-all-stops
parallelism on multicore computers?<br>
</blockquote>
<br>
There is interest. I think we should at best find the
language/library primitives necessary for making it work, and then
provide the primitives AND adopt your library into Phobos. That
way people can use your abstraction mechanisms and use their own.<br>
</blockquote>
<br>
Can you elaborate on this? Other than the atomics stuff that you
mention below, what lower-level primitives do you think need to be
exposed? IMHO tasks, parallel map and reduce, and parallel foreach <b>are</b>
the most basic primitives of this library. Aside from the atomics,
basically all the lower-level machinery it uses (condition
variables, etc.) is straight out of Phobos/druntime.<br>
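<br>
To make it concrete, usage looks roughly like this. It's a
simplified sketch; the identifiers (taskPool, parallel, map, reduce,
task) are approximate rather than the exact current API, so check
the docs for the real signatures:<br>
<pre>import std.math : sqrt;
import parallelFuture; // module name assumed; adjust to wherever the library lives

void main() {
    auto nums = new double[1_000_000];
    foreach (i, ref x; nums) x = i;

    // Parallel foreach: the pool carves the range into chunks and runs
    // the loop body on its worker threads.
    foreach (ref x; taskPool.parallel(nums)) {
        x = sqrt(x);
    }

    // Parallel map: apply a function to every element using all cores.
    auto roots = taskPool.map!sqrt(nums);

    // Parallel reduce: combine the elements, splitting the work across cores.
    auto total = taskPool.reduce!"a + b"(nums);

    // A task (future): start a computation in the background and fetch
    // the result only when it's actually needed.
    static double sum(double[] a) {
        double s = 0;
        foreach (x; a) s += x;
        return s;
    }
    auto fut = task!sum(nums);
    taskPool.put(fut);              // run it on the pool
    auto result = fut.yieldForce(); // block until done, then grab the value
}</pre>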
<br>
<blockquote cite="mid:4C774B36.6090808@erdani.com" type="cite">
<br>
I see you have some CAS instructions. Sean, I think it's a good
time to collaborate with David to put them into druntime or
std.concurrency.
<br>
</blockquote>
<br>
Yeah, D needs a real atomics library. core.atomic is a good start,
but I won't use it until it can <b>efficiently</b> do things like
atomic increment.<br>
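<br>
For instance, an atomic increment ought to compile down to a single
locked read-modify-write (LOCK XADD on x86), not a hand-rolled
compare-and-swap retry loop. A rough sketch of both flavors,
assuming core.atomic's atomicOp/atomicLoad/cas work as advertised:<br>
<pre>import core.atomic;

shared int counter;

void bumpFast() {
    // What I want: one locked read-modify-write instruction.
    atomicOp!"+="(counter, 1);
}

void bumpByHand() {
    // The CAS retry loop you write when the above isn't available or
    // isn't guaranteed to be efficient: reload and retry until no other
    // thread changed the value in between.
    int old;
    do {
        old = atomicLoad(counter);
    } while (!cas(&amp;counter, old, old + 1));
}</pre>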
<blockquote cite="mid:4C774B36.6090808@erdani.com" type="cite">
<br>
<br>
Andrei
<br>
<br>
</blockquote>
<br>
</body>
</html>