[dmd-concurrency] synchronized, shared, and regular methods inside the same class
Álvaro Castro-Castilla
alvaro.castro.castilla at gmail.com
Wed Jan 6 05:11:29 PST 2010
I didn't invent that syntax; I remembered it from a paper and was looking
for it. Here it is:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.5.9270
Best regards,
Álvaro Castro-Castilla
On 6 January 2010 at 13:52, Álvaro Castro-Castilla <
alvaro.castro.castilla at gmail.com> wrote:
>
>
> 2010/1/6 Michel Fortin <michel.fortin at michelf.com>
>
> On 2010-01-05, at 18:31, Álvaro Castro-Castilla wrote:
>>
>> > Is there a way to make this eventually generalizable to any
>> > operation or group of operations that should be performed atomically?
>> >
>> > something like:
>> >
>> > atomic {
>> >     ++x;
>> > }
>> >
>> > or even
>> >
>> > atomic ++x;
>> > atomic(++)x;
>>
>> I proposed this earlier:
>>
>> ++atomic(y)
>>
>> The idea is to create a function template "atomic" that would return a
>> temporary struct. All operators defined for that struct would translate to
>> atomic operations done on y, or memory barriers for read and writes.
>>
>>
>>
>
> Well, of course that makes way more sense if it's a function template :). I
> was referring to a compiler feature.
>
>
>
>> > one that would create a variable doing the work of "y", but for cases
>> > where you want to specify the operations yourself.
>> >
>> > atomic {
>> >     int y = x;
>> >     y++;
>> >     x = y;
>> > }
>>
>> This syntax I don't like. While enclosing one operation in an atomic block
>> is fine, here it looks as though all those actions are done in one atomic
>> operation, which isn't at all the case. How hard is it to write this
>> instead:
>>
>> int y = atomic(x);
>> y++;
>> atomic(x) = y;
>>
>>
>
> Just to clarify better what I meant:
> atomic{} would be supported by the compiler for defining critical regions
> as transactions. It would group the whole transaction "atomicIncrement". I
> understand that generalizing this would be hard, or maybe impossible if you
> call functions inside that code. However, allowing calls only to pure
> functions might be doable.
>
> Anyway, I just pointed this out because the whole debate is heading toward
> Message Passing as the main way to do concurrency in D, leaving traditional
> threads and especially Software Transactional Memory behind. But how
> easy/feasible would it be to implement STM as a library? (just a question)
> I guess threads will be kept, but STM is not being mentioned. My point is
> that there are situations where MP is not the best/easiest solution. For
> instance, for multi-agent simulations you might think that MP would work
> well. However, when you want to visualize the simulation, you need to
> duplicate the messages and send them to a central thread that processes all
> the data and visualizes it. With normal threading this would be the typical
> "dirty flag": stop the simulation and bring the data into the visualization
> thread; with STM you could just read a snapshot of the data and visualize
> it with no need for locks.
>
> I would say:
> classical threads -> message passing -> transactional
> indicates a progression from a more computational concept to a more natural
> one. This might be highly questionable, but I think of them as abstractions
> of the natural world. I could bore you with more analogies, but I must get
> back to work. What is clear at this point, however, is that we are not sure
> which of these (or other) concurrency methods is going to prevail or
> coexist. Does D want to tie itself to MP only, or create the tools for
> various methods? At the least, I think it would be a good decision to keep
> MP along with some support for traditional threads. I'm a great supporter
> of MP anyway (that's why I'm here).
>
>
> Best Regards,
>
> Álvaro Castro-Castilla
>