<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
It doesn't use std.concurrency. It just uses core.thread. For now,
core.thread doesn't take a shared delegate, meaning it bypasses the
entire shared system. Sean said a while back that eventually it
would start taking a shared delegate. When it does, I'll simply
cast the workLoop delegate to shared, thus bypassing the entire
shared system again. As the warning at the top of the module
states, it does subvert the type system to achieve completely
unchecked sharing, though this doesn't require relying on
implementation bugs, just unsafe casts. <br>
<br>
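(To make the plan concrete, here's a minimal sketch of the cast I have
in mind, assuming core.thread eventually takes a delegate typed as
shared; the names here are illustrative, not the module's actual
internals:)<br>
<pre>
import core.thread;

class TaskPool {
    // Worker threads run this loop, pulling tasks off a queue.
    void workLoop() { /* ... */ }

    void startWorker() {
        // Deliberately unsafe: the cast asserts that this delegate is
        // safe to share across threads, with zero compiler checking.
        auto dg = cast(void delegate() shared) &this.workLoop;
        // Once core.thread accepts shared delegates, dg would be
        // handed to a new Thread here.
    }
}
</pre>
<br>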
I've thought about this enough to conclude that trying to improve the
type system so this lib can use something other than completely
unchecked sharing, while still being useful for pedal-to-the-metal
parallelism, is a lost cause, at least for D2. <b>Maybe</b> it's
doable in D3. Even if it is technically doable, I think it would make
the library so inefficient and/or the API so obtuse that, for
something like this, unsafe "here be dragons" + @system is the right
answer. Those who want a safe multithreading model can simply not use
this module.<br>
<br>
I am completely in favor of std.parallelism coming with a huge warning
on it, being @system as opposed to @trusted, and not being considered
the "flagship" multithreading model. However, even TDPL mentions the
possibility of using casts to achieve unchecked sharing, which is
exactly what this module will do once core.thread starts taking shared
delegates. If D is still supposed to be a systems language, I think
dangerous, pedal-to-the-metal libraries like this have their place in
Phobos, as long as it's clear that that's what they are.<br>
<br>
On 9/5/2010 3:06 AM, Andrei Alexandrescu wrote:
<blockquote cite="mid:4C834171.6010001@erdani.com" type="cite">Continuing
to catch up on older email...
<br>
<br>
David, the intent of shared is to prevent sharing of everything that
isn't shared. I didn't get to review your parallelism library yet, but
I think it's likely your library uses things that shouldn't actually
work :o). If it does, we should work towards making the type system
better so it accepts your code without inadvertent sharing.
<br>
<br>
Andrei
<br>
<br>
On 07/31/2010 11:35 PM, David Simcha wrote:
<br>
<blockquote type="cite">I've started thinking about how to make
ParallelFuture jive with D's new
<br>
threading model, since it was designed before shared and
std.concurrency
<br>
were implemented and is basically designed around default
sharing.
<br>
(core.thread takes a non-shared delegate, and allows you to
completely
<br>
bypass the shared system, and from what I remember of newsgroup
<br>
discussions, this isn't going to change.)
<br>
<br>
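To illustrate today's behavior, a minimal sketch:
<br>
<pre>
import core.thread;

void main() {
    int counter;                 // an ordinary thread-local variable

    // A nested function's delegate closes over main's stack frame.
    void bump() { counter++; }

    // core.thread happily takes the plain, non-shared delegate, so
    // the spawned thread mutates main's frame with no shared in sight.
    auto t = new Thread(&bump);
    t.start();
    t.join();
}
</pre>
<br>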
I've re-read the concurrency chapter in TDPL, and I'm still trying to
understand what the model actually is for shared data. For example, the
following compiles and, IIUC, shouldn't:
<pre>
shared real foo;

void main() {
    foo++;  // unsynchronized read-modify-write on shared data
}
</pre>
<br>
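(My understanding of why it shouldn't compile: ++ on shared data is an
unsynchronized read-modify-write. For types with hardware atomic
support, the checked way to do this is core.atomic; an 80-bit real has
no such support, so there's no atomic fallback for it at all. A
minimal sketch of the atomic version for an int:)
<br>
<pre>
import core.atomic;

shared int counter;

void main() {
    // atomicOp performs the read-modify-write as one atomic operation.
    atomicOp!"+="(counter, 1);
}
</pre>
<br>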
I guess the high-level question I'm *still* not quite getting an
answer to is: what is shared, besides a piece of syntactic salt that
makes it harder to inadvertently share data across threads?
<br>
<br>
Secondly, my parallel foreach loop implementation relies on sharing
the current stack frame, and anything reachable from it, across
threads. For example:
<br>
<br>
<pre>
import std.stdio;

// fillNums, getSomeOtherNum and isPrime are helpers defined elsewhere.
void main() {
    auto pool = new TaskPool;
    uint[] nums = fillNums();
    uint modBy = getSomeOtherNum();

    foreach(num; pool.parallel(nums)) {
        if(isPrime(num % modBy)) {
            writeln("Found prime number: ", num % modBy);
        }
    }
}
</pre>
<br>
<br>
Allowing stuff like this is personally useful to me, but if the idea
is that we have no implicit sharing across threads, then I don't see
how something like this can be implemented. When you call a parallel
foreach loop like this, **everything** on the current stack frame is
**transitively** shared. Doing anything else would require a complete
redesign of the library. Is calling pool.parallel enough of an
explicit request for "here be dragons" that the delegate should simply
be cast to shared? If not, does anyone see another reasonable way to
do parallel foreach?
<br>
<br>
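For concreteness, the guts of pool.parallel look roughly like this
(a sketch with illustrative names, not the actual implementation):
<br>
<pre>
// foreach over the returned struct lowers the loop body to a delegate
// whose context pointer is the caller's stack frame.  Handing that
// delegate to worker threads shares the frame, transitively and
// completely unchecked.
struct ParallelForeach {
    uint[] range;

    int opApply(scope int delegate(ref uint) dg) {
        foreach(ref num; range) {     // serial stand-in for the real
            if(auto res = dg(num)) {  // work-splitting logic
                return res;
            }
        }
        return 0;
    }
}
</pre>
<br>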
On 7/31/2010 7:31 AM, Andrei Alexandrescu wrote:
<br>
<blockquote type="cite">Hello,
<br>
<br>
Here's a belated answer to your question (hectic times
prevented me
<br>
from tending to non-urgent email).
<br>
<br>
I think a parallel library would be great to have as indeed
phobos is
<br>
geared at general concurrency. Such a lib would also expose
bugs and
<br>
weaknesses in our model and its implementation.
<br>
<br>
Andrei
<br>
<br>
Sent by shouting through my showerhead.
<br>
<br>
On May 30, 2010, at 12:54 PM, David Simcha
<<a class="moz-txt-link-abbreviated" href="mailto:dsimcha@gmail.com">dsimcha@gmail.com</a>> wrote:
<br>
<br>
<blockquote type="cite">I have a few questions/comments about
the possible inclusion of a
<br>
library for parallelism in Phobos:
<br>
<br>
1. What is the status of std.concurrency? It's in the
source tree,
<br>
but it's not in the documentation or the changelogs. It
appears to
<br>
have been checked in quietly ~3 months ago, and I just
noticed now.
<br>
<br>
2. From reading the description of std.concurrency in TDPL, it seemed
more geared toward concurrency (i.e., making stuff appear to be
happening simultaneously, useful for things like GUIs and servers)
than toward parallelism (i.e., using multiple CPU cores to increase
throughput, useful for things like scientific computing and video
encoding). It seems fairly difficult (though I haven't tried yet) to
write code that's designed for pull-out-all-stops maximal performance
on a multicore machine, especially since immutability is somewhat of
a straitjacket. I find implicit sharing and the use of small
synchronized blocks or atomic ops to be very useful in writing
parallel programs.
<br>
<br>
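For comparison, the message-passing style that std.concurrency (per
TDPL) encourages looks roughly like this; only by-value and
immutable/shared data may cross the thread boundary:
<br>
<pre>
import std.concurrency, std.stdio;

void worker() {
    // Blocks until an int message arrives from the owner thread.
    auto n = receiveOnly!int();
    writeln("got ", n);
}

void main() {
    auto tid = spawn(&worker);
    tid.send(42);   // copies the value; nothing is shared
}
</pre>
<br>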
3. Most code where parallelism, as opposed to concurrency, is the goal
(at least most that I write) is parallelized in one or two small,
performance-critical sections, with the rest written serially.
Therefore, it's easy to reason about things, and safety isn't as
important as in the case of concurrency-oriented multithreading over
large sections of code.
<br>
<br>
4. I've been eating my own dogfood for a while on my ParallelFuture
library
(<a class="moz-txt-link-freetext" href="http://cis.jhu.edu/~dsimcha/parallelFuture.html">http://cis.jhu.edu/~dsimcha/parallelFuture.html</a>;
<a class="moz-txt-link-freetext" href="http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d">http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d</a>).
It's geared toward throughput-oriented parallelism on multicore
machines, not concurrency for GUIs, servers, etc., and is higher level
than std.concurrency. Is there any interest in including something
like this in Phobos? If so, would we try to make it fit into the
explicit-sharing-only model, or treat it as an alternative method of
multithreading geared toward pull-out-all-stops parallelism on
multicore computers?
<br>
<br>
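By way of illustration, a future-style call in the library looks
something like the sketch below (hypothetical method names; see the
linked docs for the real API):
<br>
<pre>
import std.stdio;

// Deliberately slow toy workload.
ulong fib(uint n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

void main() {
    auto pool = new TaskPool;
    // Start fib(40) on a worker thread, keep computing here, then
    // block on the future for the answer.
    auto future = pool.task(&fib, 40u);  // hypothetical call
    auto local = fib(35);                // runs concurrently here
    writeln(future.waitGet() + local);   // hypothetical blocking get
}
</pre>
<br>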
One last note: Walter claimed a while back on the NG that
ParallelFuture doesn't compile. I use it regularly and it compiles for
me. Walter, can you please point out what the issue was?
<br>
</blockquote>
</blockquote>
</blockquote>
_______________________________________________
<br>
phobos mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:phobos@puremagic.com">phobos@puremagic.com</a>
<br>
<a class="moz-txt-link-freetext" href="http://lists.puremagic.com/mailman/listinfo/phobos">http://lists.puremagic.com/mailman/listinfo/phobos</a>
<br>
<br>
</blockquote>
<br>
</body>
</html>