Network server design question

Sean Kelly sean at invisibleduck.org
Tue Aug 6 09:54:06 PDT 2013


On Aug 5, 2013, at 4:49 PM, Brad Roberts <braddr at puremagic.com> wrote:

> On 8/5/13 4:33 PM, Sean Kelly wrote:
>> 
>> 
>> Given the relatively small number of concurrent connections, you may be best off just spawning a
>> thread per connection.  The cost of context switching at that level of concurrency is reasonably
>> low, and the code will be a heck of a lot simpler than an event loop dispatching jobs to a thread
>> pool (which is the direction you might head with a larger number of connections).
> 
> I agree, with one important caveat:  converting from a blocking thread-per-connection model to a non-blocking pool-of-threads model is often essentially starting over.  Even at the 50-thread point, I tend to think you've passed the point of just throwing threads at the problem.  But I'm also much more used to dealing with tens of thousands of sockets, so my view is a tad biased.

I'm in the same boat in terms of experience, so I'm trying to resist my inclination to do things the scalable way in favor of the simplest approach that meets the stated requirements.  You're right that switching would mean a total rewrite, though, unless you switched to Vibe, which uses fibers to make the code look like the thread-per-connection approach while actually multiplexing connections underneath.
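
For reference, here's a minimal sketch of the thread-per-connection approach in D, assuming a trivial echo workload; the port number, backlog, and buffer size are arbitrary:

    import std.socket;
    import core.thread;

    // Handle one client on its own thread: a plain blocking
    // read/write loop that echoes data back until the peer closes.
    void handleClient(Socket client)
    {
        scope (exit) client.close();
        ubyte[1024] buf;
        for (;;)
        {
            auto n = client.receive(buf[]);
            if (n <= 0) break;         // peer closed or error
            client.send(buf[0 .. n]);  // echo it back
        }
    }

    void main()
    {
        auto listener = new TcpSocket;
        listener.setOption(SocketOptionLevel.SOCKET, SocketOption.REUSEADDR, true);
        listener.bind(new InternetAddress(4000));
        listener.listen(10);
        for (;;)
        {
            auto client = listener.accept();
            // One thread per connection: fine at this scale.
            new Thread({ handleClient(client); }).start();
        }
    }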

The really tricky bit about multiplexing, however, is how to deal with situations where you need to perform IO to handle client requests.  If that IO isn't event-based as well, then you're once again spawning threads to keep it from holding up request processing.  I'm actually kind of surprised that more current-gen APIs don't expose the file descriptor they use for their work, or provide some other means of integrating into an event loop.  In a lot of cases it seems like I end up having to write my own version of whatever library just to get the scalability characteristics I require, which is a horrible use of time.
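
To illustrate the workaround, here's a minimal std.concurrency sketch of shoving blocking work onto a helper thread and collecting the result back as a message; blockingLookup is a hypothetical stand-in for whatever library call can't be multiplexed:

    import std.concurrency;
    import std.stdio;

    // Hypothetical stand-in for a third-party library call that
    // blocks and exposes no file descriptor to select/poll on.
    string blockingLookup(string key)
    {
        import core.thread : Thread;
        import core.time : msecs;
        Thread.sleep(100.msecs);  // simulate slow, blocking IO
        return "value-for-" ~ key;
    }

    void worker(Tid owner, string key)
    {
        // Do the blocking work off the event loop, then hand the
        // result back as a message the loop picks up when ready.
        owner.send(key, blockingLookup(key));
    }

    void main()
    {
        // The "event loop" thread: dispatch the blocking call to a
        // helper thread and keep servicing other requests meanwhile.
        spawn(&worker, thisTid, "client-42");

        // ... other event handling would happen here ...

        receive((string key, string result) {
            writefln("request %s completed: %s", key, result);
        });
    }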

