Escaping the Tyranny of the GC: std.rcstring, first blood
Dmitry Olshansky via Digitalmars-d
digitalmars-d at puremagic.com
Sat Sep 27 02:53:51 PDT 2014
25-Sep-2014 17:31, Ola Fosheim Grostad wrote:
> On Monday, 22 September 2014 at 19:58:31 UTC, Dmitry Olshansky wrote:
>> 22-Sep-2014 13:45, Ola Fosheim Grostad wrote:
>>> Locking fibers to threads will cost you more than using threadsafe
>>> features. One 300ms request can then starve waiting fibers even if you
>>> have 7 free threads.
>>
>> This statement doesn't make any sense taken in isolation. It lacks way
>> too much context to be informative. For instance, "locking a thread
>> for 300ms" is easily averted if all I/O and blocking sys-call are
>> managed in a separate thread pool (that may grow far beyond
>> fiber-scheduled "web" thread pool).
>>
>> And if "locked" means CPU-bound locked, then it's
>> a) hard to fix without help from the OS: re-scheduling a fiber without
>> an explicit yield ain't possible (it's cooperative; preemption is in
>> the domain of the OS).
>
> If you process and compress a large dataset in one fiber you don't need
> rescheduling. You just want the scheduler to pick fibers according to
> priority regardless of the origin thread.
So don't. Processing a large dataset is not something a single thread
should do anyway; just post it to the "workers" thread pool and wait on
it (by yielding).
There is no FUNDAMENTAL problem.
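
To make that concrete, here is a minimal sketch (my own illustration,
assuming plain core.thread.Fiber and std.parallelism rather than any
particular framework; "compress" and the trivial driver loop standing in
for a real scheduler are made up):

import core.thread : Fiber;
import std.parallelism : task, taskPool;
import std.stdio : writeln;

// Hypothetical stand-in for the CPU-heavy part of a request.
int compress(const(ubyte)[] data)
{
    int sum = 0;
    foreach (b; data) sum += b;
    return sum;
}

void main()
{
    auto data = new ubyte[](1_000_000);

    auto requestFiber = new Fiber({
        auto heavy = task!compress(data);
        taskPool.put(heavy);   // runs on the worker pool, not on this thread
        while (!heavy.done)
            Fiber.yield();     // other fibers on this thread keep running
        writeln("result: ", heavy.spinForce);
    });

    // Trivial stand-in for the scheduler: resume the fiber until it finishes.
    while (requestFiber.state != Fiber.State.TERM)
        requestFiber.call();
}

The point is only that the fiber never computes on its own thread: it
hands the work off and yields, so a "heavy" request cannot starve the
other fibers scheduled on the same thread.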
>
>> b) If CPU-bound work is happening more often than once in a while, then
>> fibers are a poor fit anyway - threads (and pools of 'em) do exactly
>> what's needed in this case by being natively preemptive and well
>> suited for running multiple CPU intensive tasks.
>
> Not really the issue. Load comes in spikes,
You are trying to change the issue itself.
Load is a multitude of requests; we are speaking of a SINGLE one taking a
lot of time. So load makes no difference here; we are talking of a
DoS-ish kind of thing, not DDoS.
And my postulate is as follows: as long as one request may take a long
time, there are going to be arbitrarily many such "long" requests in a
row, especially on public services, which everybody tries hard to abuse.
> if you on average only have
> a couple of heavy fibers at the same time then you are fine. You can
> spawn more threads if needed, but that won't help if fibers are stuck on
> a slow thread.
Well, that's convenient, I won't deny, but by itself it just patches up
the problem, and in a non-transparent way - oh, hey, 10 requests are
taking too much time, let's spawn an 11th thread.
But - if some requests may take arbitrarily long to complete, just use a
separate pool for heavy work; it's a _better_ design and more resilient
to "heavy" requests anyway.
>>> That's bad for latency, because then all fibers on
>>> that thread will get 300+ms in latency.
>>
>> E-hm, locking fibers to threads and arbitrary latency figures have very
>> little to do with each other. The nature of that latency is extremely
>> important.
>
> If you're in line behind a CPU-heavy fiber then you get that effect.
Aye, I just don't see myself doing hard work on a fiber. They are not
meant for that.
>>> How anyone can disagree with this is beyond me.
>>
>> IMHO poorly formed problem statements are not going to prove your
>> point. Pardon me for making a personal statement, but for instance showing
>> how Go avoids your problem and clearly specifying the exact conditions
>> that cause it would go a long way toward demonstrating whatever you wanted to.
>
> Any decent framework that is concerned about latency solves this the
> same way: light threads, or events, or whatever are not locked to a
> specific thread.
They do not have thread-local storage by default. But anyway - ad populum.
>
> Isolates are fine, but D does not provide them afaik.
>
Would you explain?
--
Dmitry Olshansky