std.concurrency and efficient returns

dsimcha dsimcha at yahoo.com
Wed Aug 4 06:34:15 PDT 2010


== Quote from Robert Jacques (sandford at jhu.edu)'s article
> My experience with data-parallel programming leads me to believe that a
> large number of use cases could be covered by extending D's set of
> function/member modifiers (i.e. const/shared/immutable/pure) to cover
> delegates. This would allow, for instance, a parallel foreach function to
> take a const delegate or a future function to take a shared delegate, and
> thereby provide both safety and performance. Bartosz recently blogged about
> task-driven parallelism in three "High Productivity Computing Systems"
> languages (Chapel, X10, Fortress) and criticized all three for taking the
> "here be dragons" approach.

Given that Bartosz is a type system guru, I can see where he's coming from.
Ironically, though, we're talking about making such a library usable for mere
mortals, and I consider myself a mere mortal when it comes to the complexities
of type systems: I find Bartosz's posts extremely theoretical and difficult to
follow.  I actually find it easier to visualize how work can be interleaved
between threads.
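
To make the quoted idea a bit more concrete:  you can already get part of this
today with pure and plain function pointers (which carry no hidden context at
all); the proposal would let a delegate's hidden context carry the same
qualifiers.  A made-up sketch of the part that compiles now (runTask and
answer are names I just invented):

// Only accepts tasks the compiler can verify touch no mutable global
// state.  The quoted proposal would allow the same guarantee for
// delegates by qualifying their context pointer.
T runTask(T)(T function() pure task)
{
    // A real implementation would queue this on a worker thread; calling
    // it directly keeps the sketch self-contained.
    return task();
}

int answer() pure { return 6 * 7; }

void main()
{
    auto r = runTask(&answer);
    assert(r == 42);
}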

Since shared is relatively new and (I think) not fully implemented, immutable
is a good example of why not everything can be easily expressed in the type
system.  Immutable data has some wonderful theoretical properties, but
creating immutable data in D without either sacrificing a significant amount
of efficiency (via copying) or relying on unchecked casts is close to
impossible.  Yes, we could have made unique a full-fledged type constructor,
but that would have added another level of complexity to the language when
const/immutable already seems complex to a lot of people.  The upshot is that
much of the data I share across threads is logically immutable, but I gave up
long ago on making most cases of this statically checkable.
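
For example, the usual idiom is to build the data mutably and then hand it off
with an unchecked cast; std.exception.assumeUnique is just such a cast with a
name.  A minimal sketch (buildTable is a made-up name):

import std.exception : assumeUnique;

immutable(int)[] buildTable(size_t n)
{
    auto buf = new int[n];       // mutable while we fill it in
    foreach (i, ref x; buf)
        x = cast(int)(i * i);
    // No copy is made, but the compiler can't prove that buf has no
    // other mutable aliases, so this hand-off isn't statically checkable.
    return assumeUnique(buf);
}

void main()
{
    auto table = buildTable(10);
    assert(table[3] == 9);
}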

Also, with regard to the 10x development cost, I suspect a lot of that has to
do with getting the code to scale.  Code becomes substantially harder to
write, for example, when you can't freely heap allocate whenever you want
(because you'd bottleneck on malloc and the GC).  Since we're talking about
shared memory architectures here, I'll assume the cases you're referring to
don't have thread-local heaps and that at least some synchronization points
are required for memory management.
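
As an illustration of the allocation point (sumSquares and scratch are made-up
names, and the per-element work is a stand-in):  rather than allocating a
temporary array on every call and serializing all threads on the allocator,
each thread can reuse one thread-local buffer.

import std.stdio;

double sumSquares(const(double)[] input)
{
    static double[] scratch;    // function-level static is thread-local in D
    if (scratch.length < input.length)
        scratch.length = input.length;  // occasional growth, amortized away

    double total = 0;
    foreach (i, x; input)
    {
        scratch[i] = x * x;     // stand-in for real per-element work
        total += scratch[i];
    }
    return total;
}

void main()
{
    writeln(sumSquares([1.0, 2.0, 3.0]));   // prints 14
}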

