std.parallelism equivalents for posix fork and multi-machine processing
via Digitalmars-d
digitalmars-d at puremagic.com
Thu May 14 23:19:40 PDT 2015
On Friday, 15 May 2015 at 00:07:15 UTC, Laeeth Isharc wrote:
> But why would one use python when fork itself isn't hard to use
> in a narrow sense, and neither is the kind of interprocess
> communication I would like to do for the kind of tasks I have
> in mind. It just seems to make sense to have a light wrapper.
The managing process doesn't have to be fast, but should be easy
to reconfigure. It is overall more effective (not efficient) to
use a scripting language with a REPL for scripty tasks. Forking
comes with its own set of pitfalls. The Unix way is to have a
conglomerate of simple processes tied together with a script; that
is overall easier to debug and modify.
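
As a rough sketch of that conglomerate idea in D, using std.process
(the "./worker" program and its --chunk flag are just placeholders
for whatever simple programs you would actually glue together):

import std.process : pipeProcess, Redirect, wait;
import std.stdio : writeln;

void main()
{
    // Launch a simple worker process and read its stdout, much the
    // way a shell script would tie programs together with a pipe.
    auto pipes = pipeProcess(["./worker", "--chunk", "1"], Redirect.stdout);
    scope(exit) wait(pipes.pid);

    foreach (line; pipes.stdout.byLine)
        writeln("worker says: ", line);
}

The managing side stays trivial, and each worker can be debugged and
replaced on its own.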
> Just because some problems in parallel processing are hard
> doesn't seem to me a reason not to do some work on addressing
> the easier ones that may in a practical sense have great value
> in having an imperfect (but real) solution for. Sometimes I
> have the sense when talking with you that the answer to any
> question is anything but D! ;) (But I am sure I must be
> mistaken!)
I would have said the same thing about Rust and Nim too. Overall,
what other people do with a tool affects the ecosystem and maturity.
If you do system-level programming you are less affected by the
ecosystem than when you do higher-level task-oriented programming.
What is your mission: to solve a problem effectively now, or to
start building a new framework with a time horizon measured in
years? You have to decide that first.
Then you have to decide what is more expensive, your time or
spending twice as much on CPU power (whether it is hardware or
rented time at a datacenter).
> True. But we are not speaking of getting from a raw state to
> perfection but just starting to play with the problem. If
> Walter Bright had listened to well-intentioned advice, he
> wouldn't be in the compiler business, let alone have given us
> what became D.
He set out to build a new framework with a time horizon measured
in decades. That's perfectly reasonable and what you have to
expect when starting on a new language.
If you want to build a framework for a specific use you need both
the theoretical insight and the pragmatic experience to complete it
in a timely manner. You need many, many iterations to get to a state
where it is better than whatever people use today. That is why most
(sensible) engineers will pick existing solutions that are receiving
polish, rather than the next big thing.
> Yes, indeed. But my question was more about the distinctions
> between processes and threads and the non-obvious implications
> for the design of such a framework.
If you want to use fork(), you might as well use threads. The main
distinction is that with processes you have to be explicit about
which resources to share, but after a fork() you also risk ending up
in an inconsistent state if you aren't careful.
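
A bare-bones sketch of the fork() route from D (POSIX only; after
the fork only the calling thread exists in the child, so anything
other threads were holding can be left inconsistent):

import core.sys.posix.unistd : fork, _exit;
import core.sys.posix.sys.wait : waitpid;
import core.stdc.stdio : printf;

void main()
{
    auto pid = fork();
    if (pid < 0)
        return;                     // fork failed
    if (pid == 0)
    {
        // Child: memory is a copy-on-write snapshot of the parent,
        // file descriptors are shared; everything else has to be
        // communicated explicitly (pipes, sockets, shared memory).
        printf("child doing work\n");
        _exit(0);                   // skip the parent's cleanup in the child
    }
    else
    {
        int status;
        waitpid(pid, &status, 0);   // parent waits for the child
        printf("child finished\n");
    }
}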
With a fork-based solution you still have to deal with a different
level of complexity than with a Unixy conglomerate of simple
cooperating programs. The Unix way is easier to debug and test, but
slower than an optimized multithreaded solution (and marginally
slower than a process that forks itself).
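
For comparison, the in-process alternative with std.parallelism,
where everything is shared by default because the workers are
threads in one address space (the array and its size are arbitrary):

import std.parallelism : parallel;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    auto squares = new long[1000];

    // Each iteration runs on a worker thread from the default task
    // pool; all of them write into the same array directly.
    foreach (i; parallel(iota(squares.length)))
        squares[i] = cast(long) (i * i);

    writeln(squares[$ - 1]);
}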