[dmd-concurrency] draft 6
Michel Fortin
michel.fortin at michelf.com
Thu Jan 28 10:08:21 PST 2010
On 2010-01-28, at 11:18, Andrei Alexandrescu wrote:
> Heh. Good point. I should rephrase that. The point is that sometimes you call receive() without a Variant handler because you want to leave non-matching messages in the mailbox, in the knowledge that you'll handle them later. Here's the rephrase:
>
> =========
> Planting a @Variant@ handler at the bottom of the message handling
> food chain is a good method to make sure that stray messages aren't
> left in your mailbox.
> =========
Better.
But then I'll ask another question (I suspect the answer might expose a problem): when do you want to handle messages immediately?
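For context, the pattern under discussion looks roughly like this in D (a sketch against the std.concurrency API as described in the draft; the message types and handler bodies are illustrative):

```d
import std.concurrency;
import std.variant;

void worker()
{
    for (;;)
    {
        receive(
            (int n)    { /* expected message: handle it */ },
            (string s) { /* another expected message */ },
            // Catch-all at the bottom of the food chain: any message
            // that matched none of the handlers above is consumed
            // here instead of lingering in the mailbox.
            (Variant v) { /* log and drop the stray message */ }
        );
    }
}
```

Omitting the @Variant@ handler leaves unmatched messages queued for a later receive() with more handlers.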
>>> The exception is only thrown if receive has no more matching
>>> messages and must wait for a new message; as long as receive has
>>> something to fetch from the mailbox, it will not throw. In other
>>> words, when the owner thread terminates, the owned threads’ calls
>>> to receive (or receiveOnly for that matter) will throw
>>> OwnerTerminated if and only if they would otherwise block waiting
>>> for a new message
>> Didn't we agree this should happen serially, in the same order the
>> messages are added to the queue? Ah, but you're explaining it
>> correctly two paragraphs below. :-)
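In code, the drain-then-throw behavior described above would look something like this (a sketch; the message type is illustrative):

```d
import std.concurrency;

void ownedThread()
{
    try
    {
        for (;;)
            // receive keeps returning queued messages, in the order
            // they were sent, even after the owner has exited...
            receive((string line) { /* process the line */ });
    }
    catch (OwnerTerminated)
    {
        // ...and throws only once the mailbox is drained and the
        // next call to receive would otherwise block.
    }
}
```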
>>> The ownership relation is not necessarily unidirectional. In fact,
>>> two threads may even own each other; in that case, whichever thread
>>> finishes will notify the other.
>> My idea with the thread termination protocol was that it would
>> prevent circular references, following the owner chain would always
>> lead you back to the main thread. It somewhat breaks the idea that
>> when the main thread terminates all threads end up notified. I don't
>> quite oppose this, but which use case do you have in mind for this?
>
> I don't. I chose to do what Erlang does. It is quite clear on bidirectional linking and in fact makes it the default, so it must have a reason. Sean?
Keep in mind that Erlang doesn't have a thread termination protocol. I was mostly worried about the effects on the termination protocol.
>>> the OwnerTerminated exception
>> So it is going to inherit from Exception after all?
>
> Yah, I think it should.
Ok, fine.
>>> If a thread exits via an exception, the exception OwnerFailed
>>> propagates to all of its owned threads by means of prioritySend.
>> I thought we were in agreement that it'd be better if this was handled
>> serially by default to avoid races in the program's logic?
>
> I think it's fine to propagate serially on _success_ but not on failure. Failure has priority.
I argued previously that this is what you want sometimes, but not always. If you were "copying" your file by sending its content to a browser and a failure happened while reading the end of the file, you'd want to continue sending the file up to the error point. On the other hand, if you're copying to another file, you might want to delete the partial copy on failure. In the latter case, sending the event faster is an optimization; in the former, it is not even desirable.
So my point is that sending the event through the fast track is only an optimization, and that it'll sometimes get in the way. It also introduces a risk of races in communication protocols that don't expect it.
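The race I have in mind can be sketched like this (assuming, as in the draft, that the failure is delivered by prioritySend and surfaces as an OwnerFailed exception; names are illustrative):

```d
import std.concurrency;

void fileWriter()
{
    try
    {
        for (;;)
            receive((immutable(ubyte)[] chunk) { /* write chunk to target */ });
    }
    catch (OwnerFailed e)
    {
        // With prioritySend, this handler may run while successfully
        // read chunks still sit in the mailbox, so the browser never
        // receives data that was read without error. Serial delivery
        // would have drained those chunks first.
        //
        // In the file-copy case, by contrast, jumping the queue is
        // harmless: we simply delete the partial copy here.
    }
}
```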
>> Also, no mention about what happens if the writer thread exits with
>> an exception. You only explain what happens if it exits normally
>> after catching the exception itself (sending a message to that thread
>> throws).
>
> I (only) explained what happens if the writer throws:
>
> =========
> In this case, @fileWriter@ returns peacefully when @main@ exits and
> everyone's happy. But what happens in the case the secondary
> thread---the writer---throws an exception? The call to the @write@
> function may fail if there's a problem writing data to @tgt@. In that
> case, the call to @send@ from the primary thread will fail by throwing
> an exception, which is exactly what should happen.
> =========
>
> I changed the paragraph to:
>
> =========
> In this case, @fileWriter@ returns peacefully when @main@ exits and
> everyone's happy. But what happens in the case the secondary
> thread---the writer---throws an exception? The call to the @write@
> function may fail if there's a problem writing data to @tgt@. In that
> case, the call to @send@ from the primary thread will fail by throwing
> an @OwnerFailed@ exception, which is exactly what should
> happen. By the way, if an owned thread exits normally (as opposed to
> throwing an exception), subsequent calls to @send@ to that thread also
> fail, just with a different exception type: @OwnerTerminated@.
> =========
That only partially addresses my point. In the thread termination protocol I wrote, if a thread terminates with an uncaught exception, that exception is sent back to the owner thread (and you'd get it on a call to receive). Have you decided not to do any of this?
>> On to 'shared'...
>>> The annotation helps the compiler with much more than an indication
>>> of where the variable should be allocated
>> Strange. I thought 'shared' had no relation to where the data is
>> allocated. There is no way to implement thread-local memory pools in
>> D2, since you're allowed to cast non-shared to shared when you
>> know that no one else has a reference to it, and also because of
>> immutable. So what are you trying to say exactly?
>
> For global data, shared indicates that data goes in the global memory segment, not in TLS. To avoid confusion, I just changed that to read: "The annotation helps the compiler a great deal: ..."
Ah, I see. The original text was just fine then. I just forgot about that.
>>> atomicOp!"+="(threadsCount, 1); // fine
>> Oh, wow. Can't we have a better syntax? :-/
>
> I know you'd rather define the Atomic type. I think it's better to have explicit operations, and atomicOp!"+=" is better than a million atomicBlahs. Let's collect some more opinions.
Personally, I'd rather type atomicAdd. Try typing !"+="( a couple of times for fun... Also, atomicAdd is more readable. How do you pronounce atomicOp!"+="?
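For reference, here are the competing spellings side by side (the atomicAdd name is hypothetical; only atomicOp appears in the draft):

```d
import core.atomic;

shared int threadsCount;

void enterThread()
{
    // The draft's spelling: the operation is a string template argument.
    atomicOp!"+="(threadsCount, 1);

    // The named-function spelling argued for above would read:
    //     atomicAdd(threadsCount, 1);   // hypothetical, not in the draft
}
```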
>>> The shared constructor undergoes special typechecking, distinct
>>> from that of regular functions. The compiler makes sure that
>>> during construction the address of the object or of a member of it
>>> does not escape to the outside. Only at the end of the
>>> constructor, the object is “published” and ready to become visible to
>>> multiple threads.
>> That sounds wrong. I mean, it's all fine to have a constructor that
>> ensures that no reference escapes, but is 'shared' the right term for
>> this? In all other cases, 'shared' means the 'this' pointer is
>> shared, not that it can't escape.
>
> I am not sure what the rules surrounding shared constructors should be. Inside the ctor, the object is not yet shared, so for example member arrays should be initializable, just not with aliased arrays.
>
>> I think 'scope' would be a better term for 'no escape'.
>> Also, I'm a little surprised. I thought the 'no escape' thing was
>> deemed too difficult to implement a few months ago. Has something
>> changed?
>
> Nothing has changed. Inside immutable and shared constructors, we are limiting what the constructors can do so we enable analysis without requiring inter-procedural reach.
I'm doubtful of how far this can go without being able to apply 'scope' to other functions' arguments. But if you're trying to keep the reference from escaping, I'd definitely go with the word 'scope', or 'lent'. Not 'shared'.
--
Michel Fortin
michel.fortin at michelf.com
http://michelf.com/