OpenMP
dsimcha
dsimcha at yahoo.com
Thu Oct 4 11:28:37 PDT 2012
Ok, I think I see where you're coming from here. I've replied to
some points below just to make sure and discuss possible
solutions.
On Thursday, 4 October 2012 at 16:07:35 UTC, David Nadlinger
wrote:
> On Wednesday, 3 October 2012 at 23:02:25 UTC, dsimcha wrote:
> Because you already have a system in place for managing these
> tasks, which is separate from std.parallelism. A reason for
> this could be that you are using a third-party library like
> libevent. Another could be that the type of workload requires
> additional problem knowledge of the scheduler so that different
> tasks don't tread on each other's toes (for example,
> communicating with some servers via a pool of sockets, where
> you can handle several concurrent requests to different
> servers, but can't have two tasks read/write to the same socket
> at the same time, because you'd just send garbage).
>
> Really, this issue is just about extensibility and/or
> flexibility. The design of std.parallelism.Task assumes that
> all values which "become available at some point in the
> future" are the product of a process for which a TaskPool is a
> suitable scheduler. C++ has std::future separate from
> std::promise, C# has Task vs. TaskCompletionSource, etc.
I'll look into these when I have more time, but I guess what it
boils down to is the need to separate the **abstraction** of
something that returns a value later (I'll call these futures)
from the **implementation** provided by std.parallelism (I'll
call these tasks), which was designed only with CPU-bound
workloads and multicore parallelism in mind.
On the other hand, I like std.parallelism's simplicity for
handling its charter of CPU-bound problems and multicore
parallelism. Perhaps the solution is to define another Phobos
module that models the **abstraction** of futures and provide an
adapter of some kind to make std.parallelism tasks, which are a
much lower-level concept, fit this model. I don't think the
**general abstraction** of a future should be defined in
std.parallelism, though. std.parallelism includes
parallelism-oriented things besides tasks, e.g. parallel map,
reduce, foreach. Including a more abstract model of values that
become available later would make its charter too unfocused.
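To make the abstraction/implementation split concrete, here's a minimal sketch in Python rather than D (the mapping is direct): `concurrent.futures.Future` is exactly this kind of scheduler-agnostic "value available later" handle. Nothing about it requires a pool; any producer can complete it via `set_result()`, much like C#'s TaskCompletionSource.

```python
# A Future created directly, with no executor involved -- the pure
# abstraction of "a value that becomes available later". Any producer
# (a thread pool, an event loop, or plain user code) may complete it.
from concurrent.futures import Future
import threading

fut = Future()  # no scheduler attached; just the completion handle

# The producer here is an ordinary thread, not a pool worker.
producer = threading.Thread(target=lambda: fut.set_result(42))
producer.start()

print(fut.result())  # blocks until some producer completes the future
producer.join()
```

The point is that the consumer side (`result()`) never learns, or cares, what scheduled the producer.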
>
> Maybe using the word "callback" was a bit misleading, but the
> callback would be invoked on the worker thread (or by whoever
> invokes the hypothetical Future.complete(<result>) method).
>
> Probably the most trivial use case would be to signal a condition
> variable from it in order to implement a waitAny(Task[]) method,
> which waits until the first of a set of tasks has completed.
> Ever wanted to wait on multiple condition variables? Or used
> select() with multiple sockets? This is what I mean.
Well, implementing something like ContinueWith or Future.complete
for std.parallelism tasks would be trivial, and I see how waitAny
could easily be implemented in terms of this. I'm not sure I
want to define an API for this in std.parallelism, though, until
we have something like a std.future and the **abstraction** of a
future is better defined.
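For what it's worth, waitAny really does fall straight out of a per-task completion callback plus one event, as described above. A rough Python sketch (`wait_any` and its internals are my own illustrative names, not an existing API; the callback hook is `concurrent.futures`' `add_done_callback`):

```python
# waitAny built from completion callbacks: each future signals a shared
# event when done; the first to fire is recorded under a lock.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def wait_any(futures):
    """Block until the first of `futures` completes; return that future."""
    done = threading.Event()
    first = []
    lock = threading.Lock()

    def on_done(f):
        with lock:
            if not first:       # remember only the first completion
                first.append(f)
        done.set()

    for f in futures:
        f.add_done_callback(on_done)  # fires immediately if already done
    done.wait()
    return first[0]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=2) as pool:
        slow = pool.submit(lambda: (time.sleep(2), "slow")[1])
        fast = pool.submit(lambda: "fast")
        print(wait_any([slow, fast]).result())  # prints "fast"
```

The same shape works for select()-style waiting on any mix of completion sources, which is exactly why the callback belongs on the abstract future rather than on the pool.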
>
> For more advanced/application-level use cases, just look at any
> use of ContinueWith in C#. std::future::then() is also proposed
> for C++, see e.g.
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3327.pdf.
>
> I didn't really read the N3327 paper in detail, but from a
> brief look it seems to be a nice summary of what you might want
> to do with tasks/asynchronous results – I think you could
> find it an interesting read.
I don't have time to look at these right now, but I'll definitely
look at them sometime soon. Thanks for the info.
More information about the Digitalmars-d
mailing list