[phobos] parallelism segfaults

Martin Nowak dawg at dawgfoto.de
Tue Sep 13 16:46:10 PDT 2011


It's an issue with the runtime shutdown, which ultimately unmaps all memory.

Something simpler like this always segfaults for me:

import std.parallelism, std.stdio;

void printChar(dchar c) {
    write(c);
}

void main() {
    foreach (c; "hello world\n")
        // Queued on the global taskPool, whose workers are daemon threads;
        // main() does not wait for them before the runtime shuts down.
        taskPool.put(task!printChar(c));
}
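
For comparison, a sketch of a variant that uses its own, explicitly non-daemon pool and finishes it before returning, so the workers get joined by thread_joinAll instead of being left running at shutdown. Names (TaskPool, isDaemon, finish, put) are as in the std.parallelism docs; this avoids leaving daemon threads behind but obviously doesn't fix the underlying shutdown race:

import std.parallelism, std.stdio;

void printChar(dchar c) {
    write(c);
}

void main() {
    auto pool = new TaskPool();
    pool.isDaemon = false;           // explicit: shutdown should join these workers
    foreach (c; "hello world\n")
        pool.put(task!printChar(c));
    pool.finish();                   // let the workers drain the queue and exit
    // thread_joinAll() then joins them before the GC unmaps its memory.
}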

I can't think of any benefit in letting daemon threads continue up to program termination.
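
For context, the global taskPool marks its workers as daemon threads, which thread_joinAll skips. A minimal core.thread sketch of that behavior, independent of std.parallelism:

import core.thread, core.time;

void main()
{
    auto t = new Thread({
        // Simulates a pool worker idling in its queue.
        while (true)
            Thread.sleep(dur!"msecs"(10));
    });
    t.isDaemon = true;  // daemon: thread_joinAll() at shutdown will not wait for it
    t.start();
    // main() returns immediately; the daemon thread is still alive while the
    // runtime tears down the GC and unmaps memory, i.e. the same race as above.
}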

On Tue, 13 Sep 2011 21:37:26 +0200, David Simcha <dsimcha at gmail.com> wrote:

> Thanks for looking into this.  I had been ignoring this because I thought
> it was related to 6014 (http://d.puremagic.com/issues/show_bug.cgi?id=6014).
That one is another bug/oversight in the runtime shutdown.
I stumbled over it while sketching out the allocators.
I'll follow up with a reduced test case.

> I'm a little bit confused about what the root cause is.  How can memory
> that the daemon thread still has access to be getting freed?  In terms of
> root cause, is this a bug in std.parallelism or druntime?
>
> On Tue, Sep 13, 2011 at 2:59 PM, Martin Nowak <dawg at dawgfoto.de> wrote:
>
>> I've had a look at the core dumps from the sometimes failing
>> std.parallelism test.
>> The issue is one of having daemon threads running while the GC is
>> unmapping memory.
>> Usually this goes unnoticed because the parallelism threads wait on a
>> work-queue condition.
>> Sometimes a daemon thread wakes up from its GC suspend handler after
>> memory was already freed. This issue is already mentioned in a comment
>> at gc_term.
>>
>>
>> // upon waking from the GC suspend handler:
>> Thread obj = Thread.getThis();
>>
>> ... suspend ...
>>
>> if( obj && !obj.m_lock ) // <- segfault: memory was already unmapped
>>
>>
>> I think we should bluntly kill daemon threads after thread_joinAll.
>>
>> martin
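
To make that proposal concrete, roughly the ordering I have in mind. This is a sketch only: thread_joinAll and gc_term are the real druntime entry points, while terminateDaemonThreads is a placeholder for whatever mechanism we end up with:

import core.thread : thread_joinAll;

extern (C) void gc_term();   // druntime's GC shutdown hook

// Placeholder for the proposed step: stop any remaining daemon threads so none
// of them can wake up from the GC suspend handler after memory is unmapped.
void terminateDaemonThreads() { /* hypothetical */ }

void shutdownOrderSketch()
{
    thread_joinAll();           // join all non-daemon threads (happens today)
    terminateDaemonThreads();   // proposed: kill daemon threads here ...
    gc_term();                  // ... before the GC unmaps its memory
}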