Advice requested for fixing issue 17914

Nemanja Boric 4burgos at gmail.com
Wed Oct 25 15:36:12 UTC 2017


On Wednesday, 25 October 2017 at 15:32:36 UTC, Steven Schveighoffer wrote:
> On 10/25/17 11:12 AM, Nemanja Boric wrote:
>> On Wednesday, 25 October 2017 at 14:19:14 UTC, Jonathan M Davis wrote:
>>> On Wednesday, October 25, 2017 09:26:26 Steven Schveighoffer via Digitalmars-d wrote:
>>>> [...]
>>>
>>> Maybe there was a change in the OS(es) being used that 
>>> affected the limit?
>>>
>> 
>> Yes, the stack is not immediately unmapped because it's very common
>> just to reset the fiber and reuse it for handling a new connection -
>> creating new fibers (and unmapping on termination) is a problem in
>> real life (as this is as well).
>> 
>> At Sociomantic we already had this issue:
>> https://github.com/sociomantic-tsunami/tangort/issues/2
>> Maybe this is the way to go - I don't see a reason why every stack
>> should be mmapped separately.
>
> Hm... the mprotect docs specifically state that calling mprotect on
> something that's not allocated via mmap is undefined. So if you use
> the GC to allocate Fiber stacks, you can't mprotect them.
>
> I think what we need is a more configurable way to allocate stacks.
> There is a tradeoff between mprotect and simple allocation, and it's
> not obvious to choose one over the other.
>
> I still am baffled as to why this is now showing up. Perhaps if 
> you are using mmap as an allocator (as Fiber seems to be 
> doing), it doesn't count towards the limit? Maybe it's just 
> glommed into the standard allocator's space?
>
> -Steve

I'm sorry I wrote several messages in a row, as the thoughts were
coming to me. I think the reason is that mprotect creates a new
mapping range, since the guard page needs distinct protection
attributes from the rest of the stack, hence doubling the number of
mappings.
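
To illustrate the split, here is a minimal sketch (not druntime's
actual Fiber code; the page size, stack size and the mappingCount
helper are my own assumptions): mmap one anonymous region per stack,
then mprotect the first page as a guard. The two parts no longer share
protection flags, so the kernel keeps two VMAs, and each stack counts
twice against vm.max_map_count.

    import core.sys.posix.sys.mman;   // mmap, mprotect, PROT_*, MAP_*
    import std.stdio : File, writeln;
    import std.range : walkLength;

    size_t mappingCount()
    {
        // one line per VMA in the current process
        return File("/proc/self/maps").byLine.walkLength;
    }

    void main()
    {
        enum pageSize  = 4096;        // assuming 4 KiB pages
        enum stackSize = 64 * 1024;   // arbitrary size for the sketch

        const before = mappingCount();

        // One anonymous mapping covering guard page + usable stack.
        void* p = mmap(null, pageSize + stackSize,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        assert(p != MAP_FAILED);

        // Revoking access on the first page gives it different
        // protection attributes, so the kernel splits the region
        // into two VMAs.
        mprotect(p, pageSize, PROT_NONE);

        writeln("mappings added: ", mappingCount() - before);  // typically 2
    }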

> Maybe it's just glommed into the standard allocator's space?


No, you can see each fiber's stack mapped separately when you
cat /proc/<pid>/maps.
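
If it helps, the same thing can be observed with real fibers (again
only a sketch; mappingCount and the fiber count of 100 are my own
choices, not anything from druntime):

    import core.thread : Fiber;
    import std.stdio : File, writeln;
    import std.range : walkLength;

    size_t mappingCount()
    {
        return File("/proc/self/maps").byLine.walkLength;
    }

    void main()
    {
        const before = mappingCount();

        Fiber[] fibers;
        foreach (i; 0 .. 100)
            fibers ~= new Fiber({ Fiber.yield(); });  // stack is mmapped in the ctor

        // With a guard page per stack, each fiber typically shows up as two
        // entries in /proc/self/maps: the PROT_NONE guard and the stack itself.
        writeln("mappings added: ", mappingCount() - before);
    }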

