DIP33: A standard exception hierarchy - why out-of-memory is not recoverable

Walter Bright newshound2 at digitalmars.com
Mon Apr 1 13:58:00 PDT 2013


On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
> It's time to clean up this mess.

About out-of-memory errors
--------------------------

These are considered non-recoverable exceptions for the following reasons:

1. I've almost never seen a program that could successfully recover from 
out-of-memory errors, even ones that purport to.

2. Much effort is expended trying to make them recoverable, yet it doesn't work, 
primarily because the recovery paths are never tested.

3. There are an awful lot of instances where memory is allocated - almost as 
many as allocating stack space. (Running out of stack space doesn't even throw 
an Error; the program is just unceremoniously aborted.)

4. Making it recoverable means that pure functions now have side effects, because 
nearly every pure function allocates, and a catchable allocation failure would make 
a pure function's observable behavior depend on the state of the global heap 
(sketched after this list). Function purity, rather than being a major feature of 
D, would become a little-used sideshow of marginal utility.

5. Although it's bad practice, destructors run during the unwinding process can 
also allocate memory, causing double-fault issues.

6. Memory allocation happens a lot. If allocation failure were a recoverable 
exception, very few function hierarchies could be marked 'nothrow' (see the 
sketch after this list). This throws a lot of valuable optimizations under the bus.

7. With the multiple gigs of memory available these days, if your program runs 
out of memory, it's a good sign there is something seriously wrong with it (such 
as a persistent memory leak).

8. If you must recover from specific out-of-memory possibilities, you can still 
use malloc() or some other allocation scheme that does not rely on the GC 
(sketched below).
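
For point 4, a minimal sketch of the purity argument, assuming nothing more than 
an ordinary allocating pure function (concat is just an illustrative name):

pure string concat(string a, string b)
{
    // String concatenation allocates via the GC. Today a failed allocation
    // raises OutOfMemoryError, which callers are not expected to catch, so
    // the result is still purely a function of the arguments. Were the
    // failure a catchable Exception, the observable behavior of concat
    // would also depend on global heap state.
    return a ~ b;
}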
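For point 6, a sketch of why the non-recoverable design keeps 'nothrow' usable 
(makeBuffer is a hypothetical name):

int[] makeBuffer(size_t n) nothrow
{
    // 'new' may raise OutOfMemoryError, but Error is outside the nothrow
    // guarantee, so this compiles. If allocation failure were a recoverable
    // Exception instead, neither this function nor anything that calls it
    // could be marked nothrow.
    return new int[n];
}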
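For point 8, a minimal sketch of recovering from one specific allocation failure 
by going through the C heap instead of the GC (the name tryProcess and the 
fallback behavior are only illustrative):

import core.stdc.stdlib : malloc, free;

bool tryProcess(size_t n)
{
    auto p = cast(ubyte*) malloc(n);
    if (p is null)
        return false;   // this particular failure is recoverable: fall back,
                        // retry with a smaller size, report it, etc.
    scope(exit) free(p);
    p[0 .. n] = 0;      // ... use the buffer ...
    return true;
}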


