A very interesting slide deck comparing sync and async IO
deadalnix via Digitalmars-d
digitalmars-d at puremagic.com
Fri Mar 4 10:29:46 PST 2016
On Friday, 4 March 2016 at 03:14:01 UTC, Ali Çehreli wrote:
> I imagine that lost cache is one of the biggest costs in thread
> switching. It would be great if a thread could select a thread
> with something like "I'm done, now please switch to my reader".
> And that's exactly one of the benefits of fibers: two workers
> ping pong back and forth, without much risk of losing their
> cached data.
>
> Is my assumption correct?
>
> Ali
The minimal cost of a context switch is one TLB miss (~300
cycles), one cache miss (~300 cycles) and an iret (~300 cycles).
But you usually don't do a context switch for nothing: some
actual work gets done while you are in the kernel, and this is
where things become very dependent on which system call you use.
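
If you want a very rough feel for that entry/exit cost on your
own machine, you can time a cheap system call in a loop. The
sketch below is only illustrative (D, using Thread.yield() as the
"do nothing" system call); the figure it prints depends heavily
on the CPU, the kernel and whatever mitigations are enabled:

import core.thread : Thread;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writefln;

void main()
{
    enum iterations = 1_000_000;

    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. iterations)
        Thread.yield();   // one user -> kernel -> user round trip
    sw.stop();

    writefln("Thread.yield(): ~%s ns per call",
             sw.peek.total!"nsecs" / iterations);
}

That only captures getting in and out of the kernel; an actual
switch to another thread adds the cache and TLB effects below.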
During its work, the system call will have evicted various cache
lines and TLB entries to make room for its own data. That means
that after the context switch is done, part of your cache is
gone, and it takes some time before you are running at full speed
again. You may think it is the same with a library call, and to
some extent it is, but a system call is much worse: kernel and
user space do not share the same address space and access rights,
so you typically get far more thrashing (you can't reuse TLB
entries at all, for instance).
Actual numbers will vary from one chip to another, but the
general idea remains.
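
To make the ping-pong from the quote concrete, here is a rough
sketch (D, names and counts purely illustrative) of two workers
handing control back and forth, once as fibers and once as kernel
threads synchronized with semaphores. The fiber hand-off never
enters the kernel, so the shared working set tends to stay in
cache; the thread version pays the full system call plus context
switch price on every bounce:

import core.sync.semaphore : Semaphore;
import core.thread : Fiber, Thread;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writefln;

enum rounds = 100_000;

// Two fibers hand control to each other in user space:
// Fiber.call / Fiber.yield is roughly a function call plus a
// stack switch, no kernel involved.
void fiberPingPong()
{
    auto producer = new Fiber({
        foreach (i; 0 .. rounds)
            Fiber.yield();   // "I'm done, now please switch to my reader"
    });
    auto consumer = new Fiber({
        foreach (i; 0 .. rounds)
            Fiber.yield();
    });

    auto sw = StopWatch(AutoStart.yes);
    while (producer.state != Fiber.State.TERM)
    {
        producer.call();     // run the writer until it yields...
        consumer.call();     // ...then hand the CPU straight to the reader
    }
    sw.stop();
    writefln("fiber hand-off : ~%s ns per round trip",
             sw.peek.total!"nsecs" / rounds);
}

// The same hand-off between two kernel threads: every bounce is a
// semaphore system call plus a real context switch.
void threadPingPong()
{
    auto ping = new Semaphore(0);
    auto pong = new Semaphore(0);

    auto worker = new Thread({
        foreach (i; 0 .. rounds)
        {
            ping.wait();     // the kernel parks this thread...
            pong.notify();   // ...and wakes the other one
        }
    });
    worker.start();

    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. rounds)
    {
        ping.notify();
        pong.wait();
    }
    sw.stop();
    worker.join();
    writefln("thread hand-off: ~%s ns per round trip",
             sw.peek.total!"nsecs" / rounds);
}

void main()
{
    fiberPingPong();
    threadPingPong();
}

On most machines you should see the fiber hand-off come out one
to two orders of magnitude cheaper per round trip, which is
exactly the benefit Ali is asking about.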