Gary Willoughby: "Why Go's design is a disservice to intelligent programmers"
via Digitalmars-d-announce
digitalmars-d-announce at puremagic.com
Sat Mar 28 02:17:09 PDT 2015
On Friday, 27 March 2015 at 16:48:26 UTC, Sönke Ludwig wrote:
>> 1. No stack.
>
> That reduces the memory footprint, but doesn't reduce latency.
It also removes hard-to-spot dependencies on thread-local storage.
>> 2. Batching.
>
> Can you elaborate?
With fibers you deal with a request as a single unit. With events
you break the request down into "atomic parts": you take a group
of events by timed priority, sort them by type, then process all
events of type A, then all events of type B, and so on. That gives
better cache locality, more fine-grained control over scheduling,
and makes it easier to migrate work to other servers.
But the fundamental problem with fibers that are bound to a
thread does not depend on long-running requests. You hit it even
with multiple requests under normal workloads; it is rather
obvious:
@time tick 0:
  Threads 1...N-1:
    100 ms workloads
  Thread N:
    Fiber A: async request from memcache (1 ms)
    Fiber B: async request from memcache (1 ms)
    ...
    Fiber M: async request from memcache...
@time tick 101:
  Threads 1...N-1:
    free
  Thread N:
    Fiber A: compute load, 100 ms
@time tick 201:
  Thread N:
    Fiber B: compute load, 100 ms
etc.
Also keep in mind that in a real world setting you deal with
spikes, so the load balancer should fire up new instances a long
time before your capacity is saturated. That means you need to
balance loads over your threads if you want good average latency.
Anything less makes fibers a toy language feature, IMO.