Let's talk about fibers

Liran Zvibel via Digitalmars-d digitalmars-d at puremagic.com
Thu Jun 4 06:42:39 PDT 2015


On Thursday, 4 June 2015 at 08:43:31 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 4 June 2015 at 07:24:48 UTC, Liran Zvibel wrote:
>> Since I think you won't come up with a very good case to 
>> moving them between threads on that other popular programming 
>> model,
>
> INCOMING WORKLOAD ("__" denotes yield+delay):
>
> a____aaaaaaa
>         b____bbbbbb
>         c____cccccccc
>         d____dddddd
>         e____eeeeeee
>
> SCHEDULING WITHOUT MIGRATION:
>
> CORE 1: aaaaaaaa
> CORE 2: bcdef___bbbbbbccccccccddddddeeeeeee
>
>
> SCHEDULING WITH MIGRATION:
>
> CORE 1: aaaaaaaacccccccceeeeeee
> CORE 2: bcdef___bbbbbbdddddd
>
> And this isn't even a worst case scenario. Please note that it 
> is common to start a task by looking up global caches first. So 
> this is a common pattern:
>
> 1. look up caches
> 2. wait for response
> 3. process

Fibers are good when you get tons of new work constantly.

If you just have a few things that run forever, you're probably 
better off with threads.

It's true that you can misuse fibers and then complain that 
things don't work well for you, but I don't think that should be 
supported by the language.

If you assume that new jobs always come in (and you schedule new 
jobs onto the least-loaded workers), there is no need to 
rebalance old jobs (they will finish very soon anyway).
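A rough sketch of that placement policy, in D (the counters and 
worker indices are purely illustrative, not any real scheduler's 
API):

// Illustrative only: route each *new* job to the currently
// least-loaded worker instead of migrating fibers that are
// already in flight. pendingJobs[i] would be the number of
// queued jobs on worker i; a real scheduler would keep these
// counts up to date atomically.
size_t pickWorker(const size_t[] pendingJobs)
{
    size_t best = 0;
    foreach (i, n; pendingJobs)
        if (n < pendingJobs[best])
            best = i;
    return best;
}

unittest
{
    // Core 1 is busy with a long job, core 2 is idle:
    // new work goes to core 2.
    size_t[] pending = [5, 0];
    assert(pickWorker(pending) == 1);
}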

If you have a blocking operation, it should not run in a fiber 
anyway.
We have a deferToThread mechanism with a thread pool that handles 
such calls (for when we want to do something that takes a long 
time, or use an external library).
Fibers should never, ever block. If your fiber is blocking, 
you're violating the model.
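A minimal sketch of that idea (not our actual deferToThread -- 
std.parallelism's task pool stands in for it here, and the 
polling loop is a simplification; a real scheduler would park 
the fiber and wake it when the result is ready):

import core.thread : Fiber;
import std.parallelism : task, taskPool;

// Hypothetical helper, for illustration only: run a blocking
// call on a pool thread while the calling fiber keeps yielding,
// so the fiber itself never blocks and other fibers on this
// thread stay runnable.
T deferToThread(T)(T delegate() blockingWork)
{
    auto t = task(blockingWork);
    taskPool.put(t);      // blocking call runs on a pool thread
    while (!t.done)
        Fiber.yield();    // give other fibers a turn, don't block
    return t.yieldForce;  // task already finished; fetch result
}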

Fibers aren't some magic that solves every CS problem. There is 
a well-defined class of problems that fibers handle well, and 
that's where fibers should be used (and even then with great 
discipline). If your problem is not one of these -- use another 
form of concurrency/parallelism. One of my main arguments against 
Go is "If your only tool is a hammer, then every problem looks 
like a nail" -- D should not go that route.

Looking at your example -- a good scheduler should have 
distributed a-e evenly across both cores to begin with. Then a 
good fiber programmer should yield() after each unit of work, so 
aaaaaaa won't be a valid state. Finally, the blocking code should 
have run outside the fiber I/O scheduler, leaving that fiber 
suspended until it's runnable again and allowing other fibers to 
execute.
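For example, a toy sketch with core.thread.Fiber (the round-robin 
loop below just stands in for a real scheduler):

import core.thread : Fiber;
import std.stdio : writeln;

// A long job split into bounded units, yielding between them so
// it never monopolizes its thread the way "aaaaaaa" does above.
void longJob()
{
    foreach (unit; 0 .. 7)
    {
        writeln("did unit ", unit);  // one bounded unit of work
        Fiber.yield();  // hand control back to the scheduler
    }
}

void main()
{
    auto f = new Fiber(&longJob);
    while (f.state != Fiber.State.TERM)
        f.call();  // resume until the fiber terminates
}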


