DIP33: A standard exception hierarchy - why out-of-memory is not recoverable

Walter Bright newshound2 at digitalmars.com
Tue Apr 2 10:40:49 PDT 2013


On 4/2/2013 3:40 AM, deadalnix wrote:
> On Monday, 1 April 2013 at 20:58:00 UTC, Walter Bright wrote:
>> On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
>> 5. Although a bad practice, destructors running during the unwinding process
>> can also allocate memory, causing double-fault issues.
> Why is a double fault such a big issue?

In C++, a second exception thrown while the stack is unwinding aborts the 
program (std::terminate is called), as the runtime cannot handle it. In 
general, though, it's a problem that is hard to reason about.
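
To make that concrete, here is a minimal D sketch of the pattern (the names 
Resource and risky are mine): a destructor that allocates while the stack is 
already unwinding from an earlier exception. If that allocation fails, a 
second throwable is in flight during unwinding - the double fault. In C++ 
the analogous case calls std::terminate.

    import core.stdc.stdio : printf;

    struct Resource
    {
        // Bad practice: the destructor allocates, so it can itself
        // throw (e.g. on out-of-memory) while unwinding is in progress.
        ~this()
        {
            auto buf = new ubyte[16]; // may fail mid-unwind
            printf("destructor allocated %d bytes\n", cast(int) buf.length);
        }
    }

    void risky()
    {
        Resource r;
        throw new Exception("primary failure"); // unwinding runs ~this
    }

    void main()
    {
        try
            risky();
        catch (Exception e)
            printf("caught: %.*s\n", cast(int) e.msg.length, e.msg.ptr);
    }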


>> 6. Memory allocation happens a lot. This means that very few function
>> hierarchies could be marked 'nothrow'. This throws a lot of valuable
>> optimizations under the bus.
> Can we have an overview of the optimizations that are thrown under the bus,
> and how much gain you get from them in general? Actual data are always better
> when discussing optimization.

For Win32 in particular, getting rid of EH frames significantly shortens the 
generated code (try it and see). In general, a finally block defeats many 
flow-analysis optimizations, and it prevents variables from being enregistered.
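
A small D sketch of both points (the function names are mine). The first 
function can be marked 'nothrow', so no EH frame is needed; if allocation 
could throw a recoverable exception, a single 'new' would forfeit that 
attribute for it and for every caller up the chain. The second shows the 
finally block that constrains the optimizer:

    import core.stdc.stdio : printf;

    // Provably cannot throw: the compiler can omit the EH frame and
    // keep locals in registers across the whole loop.
    int sum(const(int)[] a) nothrow
    {
        int s = 0;
        foreach (x; a)
            s += x;
        return s;
    }

    int mayThrow(int x)
    {
        if (x < 0)
            throw new Exception("negative input");
        return x * x;
    }

    // The finally block forces an EH frame: `total` must have a valid
    // value in memory at every call that might throw, which defeats
    // enregistering and blocks flow analysis across the try body.
    int withCleanup(int[] data)
    {
        int total = 0;
        try
        {
            foreach (x; data)
                total += mayThrow(x);
        }
        finally
        {
            printf("partial total: %d\n", total); // runs even on throw
        }
        return total;
    }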


>> 7. With the multiple gigs of memory available these days, if your program runs
>> out of memory, it's a good sign there is something seriously wrong with it
>> (such as a persistent memory leak).
>>
>
> DMD regularly does.

I know DMD does, and I regard that as a problem with DMD - one that cannot be 
solved by catching out-of-memory exceptions.
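
For illustration, the futile pattern looks like this (a sketch; the names 
cache and handleRequest are mine). With the leak still live, nothing the 
handler can do releases the memory, so the very next allocation fails the 
same way:

    import core.exception : OutOfMemoryError;

    ubyte[][] cache; // grows without bound - this is the actual bug

    void handleRequest(size_t n)
    {
        try
        {
            cache ~= new ubyte[n]; // the leak keeps every block live
        }
        catch (OutOfMemoryError e)
        {
            // Nothing useful to do here: the leaked memory is still
            // reachable, so retrying the allocation fails again.
        }
    }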

