Something needs to happen with shared, and soon.

luka8088 luka8088 at owave.net
Wed Nov 14 14:29:35 PST 2012


On 14.11.2012 20:54, Sean Kelly wrote:
> On Nov 13, 2012, at 1:14 AM, luka8088 <luka8088 at owave.net> wrote:
>
>> On Tuesday, 13 November 2012 at 09:11:15 UTC, luka8088 wrote:
>>> On 12.11.2012 3:30, Walter Bright wrote:
>>>> On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:
>>>>> It's starting to get outright embarrassing to talk to newcomers about D's
>>>>> concurrency support because the most fundamental part of it -- the
>>>>> shared type
>>>>> qualifier -- does not have well-defined semantics at all.
>>>>
>>>> I think a couple things are clear:
>>>>
>>>> 1. Slapping shared on a type is never going to make algorithms on that
>>>> type work in a concurrent context, regardless of what is done with
>>>> memory barriers. Memory barriers ensure sequential consistency, they do
>>>> nothing for race conditions that are sequentially consistent. Remember,
>>>> single core CPUs are all sequentially consistent, and still have major
>>>> concurrency problems. This also means that having templates accept
>>>> shared(T) as arguments and have them magically generate correct
>>>> concurrent code is a pipe dream.
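
To check that I follow point 1, here is a small sketch of my own -- the
names and the iteration count are made up, and it is not from the FAQ or
from anyone in this thread. Every individual load and store goes through
core.atomic and is sequentially consistent, yet the read-modify-write as a
whole still races, so increments get lost and no barrier will fix it:

import core.atomic;
import core.thread;
import std.stdio;

shared int counter;

void bump() {
    foreach (n; 0 .. 1_000_000) {
        // Each load and store is atomic and sequentially consistent, but
        // the read-increment-write sequence as a whole is not: both threads
        // can read the same old value, so some increments are lost.
        int old = atomicLoad(counter);
        atomicStore(counter, old + 1);
    }
}

void main() {
    auto a = new Thread(&bump);
    auto b = new Thread(&bump);
    a.start(); b.start();
    a.join(); b.join();
    writeln(atomicLoad(counter)); // almost always prints less than 2000000
}

So the qualifier by itself cannot make this correct; only the surrounding
protocol can (a mutex, or folding the whole read-modify-write into a single
atomicOp!"+=").
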
>>>>
>>>> 2. The idea of shared adding memory barriers for access is not going to
>>>> ever work. Adding barriers has to be done by someone who knows what
>>>> they're doing for that particular use case, and the compiler inserting
>>>> them is not going to substitute.
>>>>
>>>>
>>>> However, and this is a big however, having shared as compiler-enforced
>>>> self-documentation is immensely useful. It flags where and when data is
>>>> being shared. So, your algorithm won't compile when you pass it a shared
>>>> type? That is because it is NEVER GOING TO WORK with a shared type. At
>>>> least you get a compile time indication of this, rather than random
>>>> runtime corruption.
>>>>
>>>> To make a shared type work in an algorithm, you have to:
>>>>
>>>> 1. ensure single-threaded access by acquiring a mutex
>>>> 2. cast away shared
>>>> 3. operate on the data
>>>> 4. cast back to shared
>>>> 5. release the mutex
>>>>
>>>> Also, all op= need to be disabled for shared types.
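
To check my understanding of those five steps, here is a minimal sketch;
the names (Counter, counterLock, bumpSafely) are made up for illustration
and it is not from any official example:

import core.sync.mutex;
import core.thread;
import std.stdio;

class Counter {
    int value;
}

__gshared Mutex counterLock;
shared Counter sharedCounter;

void bumpSafely() {
    counterLock.lock();                        // 1. ensure single-threaded access
    scope (exit) counterLock.unlock();         // 5. release the mutex on every exit path
    auto local = cast(Counter) sharedCounter;  // 2. cast away shared
    local.value += 1;                          // 3. operate (op= is fine on the unshared view)
    sharedCounter = cast(shared) local;        // 4. cast back to shared (a formality for a
                                               //    class reference: same object either way)
}

void main() {
    counterLock = new Mutex;
    sharedCounter = cast(shared) new Counter;
    auto t1 = new Thread(&bumpSafely);
    auto t2 = new Thread(&bumpSafely);
    t1.start(); t2.start();
    t1.join(); t2.join();
    writeln((cast(Counter) sharedCounter).value); // 2
}

The scope(exit) is just there so that step 5 happens even if step 3 throws.
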
>>>
>>>
>>> This clarifies a lot, but a lot of people still get confused by
>>> http://dlang.org/faq.html#shared_memory_barriers
>>> Is that a FAQ error?
>>>
>>> Also, given what http://dlang.org/faq.html#shared_guarantees says, I have
>>> come to think that the fact that the following code compiles is either a
>>> missing implementation, a compiler bug, or a FAQ error?
>>
>> //////////
>>
>> import core.thread;
>>
>> void main () {
>>   int i;
>>   (new Thread({ i++; })).start();
>> }
>
> It's intentional.  core.thread is for people who know what they're doing, and there are legitimate uses along these lines:
>
> import core.thread;
> import std.stdio;
>
> void main() {
>      int i;
>      auto t = new Thread({i++;});
>      t.start();
>      t.join();
>      write(i);
> }
>
> This is perfectly safe and has a deterministic result.

Yes, that makes perfect sense... I just wanted to point out the misleading 
wording in the FAQ, because (at least before this forum thread) there is 
not much written about shared, and you can get the wrong idea from it (at 
least I did).

