Advice requested for fixing issue 17914

Steven Schveighoffer schveiguy at yahoo.com
Wed Oct 25 13:26:26 UTC 2017


On 10/23/17 12:56 PM, Brian Schott wrote:
> Context: https://issues.dlang.org/show_bug.cgi?id=17914
> 
> I need to get this issue resolved as soon as possible so that the fix 
> makes it into the next compiler release. Because it involves cleanup 
> code in a class destructor a design change may be necessary. Who should 
> I contact to determine the best way to fix this bug?

It appears that the limitation applies to mmap calls as well, and the 
mmap call that allocates each fiber's stack has been in Fiber since, as 
far as I can tell, the beginning. How has this not shown up before?
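
For reference, here is roughly what the Posix stack allocation boils 
down to (a simplified sketch of my understanding, not the actual 
druntime code): each fiber's stack is its own anonymous mapping, so 
every live fiber consumes one entry against vm.max_map_count.

import core.sys.posix.sys.mman;

void* allocFiberStack(size_t sz)
{
	// one anonymous private mapping per fiber stack; each mapping
	// counts against the process-wide vm.max_map_count budget
	auto p = mmap(null, sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANON, -1, 0);
	return p == MAP_FAILED ? null : p;
}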

Regardless of the cause, this puts a limit on the number of 
simultaneous Fibers one can have. In other words, this is not just a 
problem with Fibers not being cleaned up properly, because one may 
legitimately need more than 65k fibers running simultaneously. We 
should try to avoid imposing that limitation.

For example, I would think even the following code is something we 
should support:

void main()
{
	import std.concurrency : Generator, yield;
	import std.stdio : File, writeln;

	auto f = File("/proc/sys/vm/max_map_count", "r");
	ulong n;
	f.readf("%d", &n);
	writeln("/proc/sys/vm/max_map_count = ", n);
	Generator!int[] gens; // retain pointers to all the generators
	// deliberately create more generators (and thus fiber stacks)
	// than max_map_count allows
	foreach (i; 0 .. n + 1000)
	{
		if (i % 1000 == 0)
			writeln("i = ", i);
		gens ~= new Generator!int({ yield(1); });
	}
}

If we *can't* do this, then we should provide a way to manage the limits.

I.e. there should be a way to create more fibers than the limit allows, 
but only allocate stacks when we can (and have a way to tell the user 
what's going on).
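
By way of illustration, here is a rough sketch of what "manage the 
limits" could look like: gate stack allocation behind a counting 
semaphore sized from vm.max_map_count, so callers wait for a free 
mapping slot (or can be told why they're waiting) instead of failing 
outright. The names here (LimitedFiberFactory, maxMaps) are made up 
for the example, not an existing druntime API.

import core.sync.semaphore : Semaphore;
import core.thread : Fiber;

class LimitedFiberFactory
{
	private Semaphore slots;

	this(uint maxMaps)
	{
		slots = new Semaphore(maxMaps);
	}

	Fiber create(void delegate() fn)
	{
		slots.wait();          // block until a mapping slot is free
		return new Fiber(fn);  // the stack mapping happens here
	}

	void release()
	{
		slots.notify();        // call when a fiber's stack is freed
	}
}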

-Steve

