Fairness of thread scheduling?

Jason House jason.james.house at gmail.com
Mon Jun 4 20:42:00 PDT 2007


I have a loop that creates an arbitrary number of slave threads for 
doing work.  The intent is to kick off roughly as many threads as the 
computer has cores.  Having extra threads should still work, just with 
slightly lower performance.
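In case it helps, the setup is shaped roughly like the sketch below. 
Worker, doOneAction, and the timing code are simplified stand-ins for 
the real program, and it assumes D1's std.thread and std.date APIs:

import std.date;    // getUTCtime, TicksPerSecond
import std.stdio;   // writefln
import std.thread;  // Thread

class Worker : Thread
{
    int id;

    this(int id)
    {
        this.id = id;
    }

    // Stand-in for one unit of real work.
    void doOneAction()
    {
        // ... actual work elided ...
    }

    override int run()
    {
        const int batch = 500_000;  // ~half a million actions per report
        while (true)                // runs until the process is killed
        {
            d_time start = getUTCtime();
            for (int i = 0; i < batch; i++)
                doOneAction();
            d_time elapsed = getUTCtime() - start;
            if (elapsed <= 0)
                elapsed = 1;        // guard against division by zero
            long opsPerSec = batch * TicksPerSecond / elapsed;
            writefln("Thread %d: ops/sec = %dk", id, opsPerSec / 1000);
        }
        return 0;  // not reached
    }
}

void main()
{
    const int nWorkers = 20;
    Worker[] workers;
    for (int i = 0; i < nWorkers; i++)
    {
        workers ~= new Worker(i);
        workers[i].start();
    }
    // (the 3 monitoring threads are elided here)
    foreach (Worker w; workers)
        w.wait();
}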

I just kicked off an example with 20 worker threads (and 3 monitoring 
threads) on a single-core computer.  Each worker does about half a 
million actions and then prints a status update with its thread number 
(in this case, 0-19).  What I see surprises me:

Thread 0: ops/sec = 26k
Thread 1: ops/sec = 19k
Thread 2: ops/sec = 13k
Thread 0: ops/sec = 27k
Thread 1: ops/sec = 19k
Thread 0: ops/sec = 25k
Thread 2: ops/sec = 13k
Thread 0: ops/sec = 22k
Thread 1: ops/sec = 17k
Thread 0: ops/sec = 22k
Thread 2: ops/sec = 14k
Thread 1: ops/sec = 17k
Thread 0: ops/sec = 22k
<killed execution>

Two big things stand out:

1. Thread 0 (22-27k ops/sec) consistently runs faster than thread 1 
(17-19k ops/sec), which in turn runs faster than thread 2 (13-14k 
ops/sec).
2. Threads 3-19 never produce any output.

Any ideas what could be going wrong?  I've tried wrapping the run 
method of the worker threads in a try block to surface any failures 
that might otherwise be silent (see the sketch below).  Since nothing 
is failing, and per-thread throughput keeps dropping, I'm assuming 
thread starvation is occurring.  Is this expected behavior?
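For reference, the wrapper looks roughly like this (doWork is a 
stand-in for the real body of run):

// Inside the Worker class.
override int run()
{
    try
    {
        return doWork();
    }
    catch (Object o)   // in D1, anything thrown derives from Object
    {
        writefln("Thread %d died: %s", id, o.toString());
        return 1;
    }
}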

(I'm using dmd 1.010 under Linux.)

