Wish: Variable Not Used Warning

Markus Koskimies markus at reaaliaika.net
Fri Jul 11 12:46:01 PDT 2008


On Fri, 11 Jul 2008 17:10:50 +0000, BCS wrote:

> The specific effect I was talking about is not in the slides. If you
> haven't seen the video, you didn't see the part I was referring to.
> 
> 
> int[1000] data;
> 
> thread 1:
>    for(int i = 1_000_000; i; i--) data[0]++;
> 
> thread 2a:
>    for(int i = 1_000_000; i; i--) data[1]++;
> 
> thread 2b:
>    for(int i = 1_000_000; i; i--) data[999]++;
> 
> 
> On a multi core system run thread 1 and 2a and then run 1 and 2b. You
> will see a difference.

Sure I will. In the first pairing the caches of the processor cores will 
be constantly negotiating ownership of the shared cache line. If you are 
writing a program whose threads intensively access the same data 
structures, you need to know what you are doing.

There is a big difference between doing:

1)

	int thread_status[1000];

	thread_code() { ... thread_status[my_id] = X ... }

2)

	Thread* threads[1000];

	class Thread
	{ 
		int status;

		void run() { ... status = X ... }
	}

In the first example, you use a global data structure shared by all 
threads, and that can always cause problems. The entire cache system is 
based on locality; without locality in the software it will not work. In 
that example, you would need to know the details of the cache system to 
align the data correctly.

In the second example, the thread table is global, yes; but the data 
structures for the threads are allocated from the heap, and they are 
local. Whether or not they end up in the same cache line depends on the 
operating system as well as the runtime library (the implementation of 
the heap: does it align blocks to cache lines or not).

When writing threaded code, I would always suggest minimizing accesses 
to global data structures and using local data wherever possible. Most 
probably every forthcoming processor architecture will try to improve 
the effectiveness of such threads.

I would also try to use the standard thread libraries, since they try to 
tackle the machine-dependent bottlenecks.



More information about the Digitalmars-d mailing list