Yet another parallel foreach + continue question

H. S. Teoh hsteoh at quickfur.ath.cx
Tue Jul 20 02:58:50 UTC 2021


On Tue, Jul 20, 2021 at 02:39:58AM +0000, seany via Digitalmars-d-learn wrote:
> On Tuesday, 20 July 2021 at 02:31:14 UTC, H. S. Teoh wrote:
> > On Tue, Jul 20, 2021 at 01:07:22AM +0000, seany via Digitalmars-d-learn
> > wrote:
> > > On Tuesday, 20 July 2021 at 00:37:56 UTC, H. S. Teoh wrote:
> > > > [...]
> > > 
> > > Ok, therefore it means that, if at `j = 13` I use a continue, then
> > > the thread where I had `10`...`20` as values of `j` will only
> > > execute for `j = 10, 11, 12` and will not reach `14` or later?
> > 
> > No, it will.
> > 
> > Since each iteration is running in parallel, the fact that one of
> > them terminated early should not affect the others.
[...]
> Even tho, the workunit specified 11 values to a single thread?

Logically speaking, the size of the work unit should not change the
semantics of the loop. That's just an implementation detail that should
not affect the semantics of the overall computation.  In order to
maintain consistency, loop iterations should not affect each other
(unless they deliberately do so, e.g., by reading/writing a shared
variable -- but parallel foreach itself should not introduce such a
dependency).

I didn't check the implementation to verify this, but I'm pretty sure
`break`, `continue`, etc., in the parallel foreach body do not change
which iterations get run.
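Here's a minimal sketch (mine, not from the thread) of the scenario in
the question: 21 iterations with a work unit size of 11, so iterations
10..20 land in a single work unit, and a `continue` at `j == 13` skips
only that one iteration:

```d
import std.parallelism : parallel;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    bool[21] ran;   // each index written by exactly one iteration, so no race

    // workUnitSize = 11: iterations 10..20 go to one thread as a unit
    foreach (j; parallel(iota(21), 11))
    {
        if (j == 13)
            continue;       // ends only this iteration
        ran[j] = true;      // 14..20 still run in the same work unit
    }

    writeln(ran);           // only index 13 is false
}
```

Note also that, if memory serves, the std.parallelism docs say that
`break`, `return`, `goto`, and labeled break/continue in a parallel
foreach body throw a `ParallelForeachError`, precisely because they
don't have a sensible parallel meaning -- whereas a plain `continue`
just means "this iteration is done".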

Think of it this way: when you use a parallel foreach, what you're
essentially asking for is that, logically speaking, *all* loop
iterations start in parallel (even though in practice that only happens
if you have as many CPUs as you have iterations). That means by the
time a thread reaches the `continue` in a particular iteration, *all*
of the other iterations may already have started executing.  So it
doesn't make sense for any of them to be interrupted just because this
particular iteration executes a `continue`.  Doing otherwise would
introduce all sorts of weird, inconsistent semantics that are hard (if
not impossible) to reason about.

While I'm not 100% sure this is what the current parallel foreach
implementation actually does, I'm fairly confident it is. It wouldn't
make sense to do it any other way.


T

-- 
Ph.D. = Permanent head Damage
