Unofficial wish list status. (Jul 2008)

superdan super at dan.org
Fri Jul 4 08:28:01 PDT 2008


Oskar Linde wrote:

> superdan wrote:
> > Me Here wrote:
> > 
> >> Walter Bright wrote:
> >>
> >>> Yes, but the onus will be on you (the programmer) to prevent data races and
> >>> do proper synchronization.   
> >> In the scenario described, the main thread initialises the array of data. Then,
> >> non-overlapping slices of that are portioned out to N worker threads. Only one
> >> thread ever modifies any given segment. When the worker threads are complete,
> >> the 'results' are left in the original array available in its entirety only to
> >> the main thread.
> >>
> >>> You have to be very wary of cache effects when
> >>> writing data in one thread and expecting to see it in another.
> >> Are you saying that there is some combination of OS and/or hardware L1/L2
> >> caching that would allow one thread to read a memory location (previously)
> >> modified by another thread, and see 'old data'?
> >>
> >> Cos if you are, it's a deeply serious bug that if it's not already very well
> >> documented by the OS writer or hardware manufacturers, then here's your chance
> >> to get slashdotted (and diggited and redited etc. all concurrently) as the
> >> discoverer of a fatal processor flaw.
> > 
> > google for "relaxed memory consistency model" or "memory barriers". geez.
> 
> I presume the discussion regards symmetric multiprocessing (SMP).
> 
> Cache coherency is a very important element of any SMP design. It 
> basically means that caches should be fully transparent, i.e. the 
> behavior should not change by the addition or removal of caches.

you are perfectly correct... as of ten years ago. you are right that cache coherency protocols ensure the memory model is respected regardless of adding or eliminating caches. (i should know coz i implemented a couple for a simulator.) the problem is that memory models have been aggressively relaxed in recent years, providing less and less implied ordering and requiring programs to issue explicit synchronization directives.
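to make "explicit synchronization directives" concrete, here's a minimal sketch in C++11 notation (anachronistic for this thread, but it names the directives a relaxed model forces you to write; all identifiers are mine, for illustration):

```cpp
// A release/acquire handoff: under a relaxed memory model, the plain
// write to `payload` is only guaranteed visible to the reader because
// of the explicit release store / acquire load pair on `ready`.
#include <atomic>
#include <thread>

static int payload = 0;                 // ordinary, non-atomic data
static std::atomic<bool> ready(false);  // the explicit sync directive

// Producer: write the data, then publish it with a release store.
void produce() {
    payload = 42;                                   // plain write
    ready.store(true, std::memory_order_release);   // nothing above may move below this
}

// Consumer: spin with acquire loads; once the flag is seen true,
// the payload write is guaranteed to be visible.
int consume() {
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    return payload;                                 // guaranteed 42
}

int handoff() {
    std::thread t(produce);
    int v = consume();
    t.join();
    return v;
}
```

drop either the release or the acquire and the model no longer promises the reader sees 42 — that's the "implied ordering" the hardware used to give you for free.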

> So the above scenario should never occur. If thread A writes something 
> prior to thread B reading it, B should never get the old value.

yeah the problem is it's hard to define what "prior" means. without explicit synchronization the two threads share no common notion of time, so "A wrote before B read" isn't even well defined — ordering only comes from the happens-before edges you build with locks, joins, or barriers.
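in the slice-per-thread scenario quoted above, the thing that makes "prior" well defined is the join: everything a worker wrote happens-before everything the main thread reads after joining it. a hedged sketch in C++11 notation (names and the trivial fill are mine):

```cpp
// Main thread initialises the array, hands non-overlapping slices to
// N workers, then joins. The join() calls are the synchronization
// edges that make the workers' writes "prior" to main's reads.
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

std::vector<int> fill_in_slices(std::size_t n, std::size_t nworkers) {
    std::vector<int> data(n, 0);            // main thread initialises
    std::vector<std::thread> workers;
    std::size_t chunk = n / nworkers;
    for (std::size_t w = 0; w < nworkers; ++w) {
        std::size_t lo = w * chunk;
        std::size_t hi = (w + 1 == nworkers) ? n : lo + chunk;
        // each worker writes only its own non-overlapping slice
        workers.emplace_back([&data, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i)
                data[i] = 1;
        });
    }
    for (auto& t : workers)
        t.join();   // happens-before edge: worker writes precede main's reads
    return data;    // safe to read the results in their entirety now
}
```

read the array before the joins and "prior" evaporates: there is no edge, hence no defined order, hence possibly stale data.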

> "Memory barriers" have nothing to do with cache consistency. A memory 
> barrier only prevents a single CPU thread from reordering load/store 
> instructions across that specific barrier.

memory barriers are exactly how you selectively strengthen the relaxed memory model that was pushed aggressively by the need for faster caches and store buffers. they don't just stop one cpu from reordering its own instruction stream — they constrain the order in which that cpu's stores become visible to other processors.



More information about the Digitalmars-d mailing list