Precise GC state
Ola Fosheim Grøstad
ola.fosheim.grostad+dlang at gmail.com
Fri Nov 24 07:48:03 UTC 2017
On Friday, 24 November 2017 at 05:34:14 UTC, Adam Wilson wrote:
> RAII+stack allocations make sense when I care about WHEN an
> object is released and wish to provide some semblance of
> control over deallocation (although as Andrei has pointed out
> numerous times, you have no idea how many objects are hiding
> under that RAII pointer). But there are plenty of use cases
> where I really don't care when memory is released, or even how
> long it takes to release.
A GC makes the most sense when the compiler fails to disprove
that an object can be part of a cycle of references that can
become detached.
So it makes the most sense for typically long-lived objects.
Imagine if you spent all the effort needed to get a generational
GC to work well on implementing pointer analysis to improve
automatic deallocation instead…
In order to keep the generated code fast while still allowing a
generational GC to run (the collector has to be told about
pointer stores through write barriers), you would need a massive
amount of work on pointer analysis, plus all the work of building
a state-of-the-art GC runtime.
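To illustrate why generated code is affected at all: a
generational collector needs a record of stores that make an old
object point at a young one, so that a minor collection does not
have to scan the whole heap. Roughly, every pointer store has to
be lowered to something like the sketch below (the names and data
structures are invented for the example, this is not druntime
code):

    bool[Object] oldGeneration;  // stand-in for "lives in the old space"
    Object[] rememberedSet;      // objects a minor collection must rescan

    void storeWithBarrier(ref Object slot, Object value, Object owner)
    {
        slot = value;
        // An old object that now references a (possibly) young one must
        // be recorded, or a minor collection that only scans the young
        // space would miss the new reference and free a live object.
        if (owner in oldGeneration && value !in oldGeneration)
            rememberedSet ~= owner;
    }

Pointer analysis is what lets the compiler skip the barrier when
it can prove a store cannot create an old-to-young reference.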
Most people consider designing and implementing pointer analysis
to be difficult. The regular data flow analysis that the D
compiler has now is trivial in comparison. Might need a new IR
for it, not sure.
> Obviously my pattern isn't "wrong" or else DMD itself is
> "wrong". It's just not your definition of "correct".
Well, you could redefine the semantics of D so that unsafe code
and possibly some other things are disallowed. Then maybe a
generational GC would be easy to implement, if you don't expect
better performance than any other high-level language.
> Another use case where RAII makes no sense is GUI apps. The
> object graph required to maintain the state of all those
> widgets can be an absolute performance nightmare on
> destruction. Closing a window can result in the destruction of
> tens-of-thousands of objects (I've worked on codebases like
> that), all from the GUI thread, causing a hang, and a bad user
> experience. Or you could use a concurrent GC and pass off the
> collection to a different thread. (See .NET)
Sounds like someone didn't do any design before they started
coding and just kept adding stuff.
Keep in mind that OS X and iOS use reference counting for all
objects and it seems to work for them. But they have also put
significant effort into pointer analysis to reduce ref-counting
overhead, so it is still quite a lot more work for the compiler
designer than plain RAII.
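The flavour of optimization involved is roughly this: when the
compiler can prove that a callee never keeps the reference alive
past the call, the retain/release pair around the call is dead
and can be removed. A hand-rolled sketch in D (not Apple's ARC
and not std.typecons.RefCounted, just an illustration):

    struct Payload { int refs; int value; }

    Payload* retain(Payload* p)
    {
        if (p) ++p.refs;
        return p;
    }

    void release(Payload* p)
    {
        if (p && --p.refs == 0)
            destroy(*p);   // a real implementation would also free p
    }

    int useValue(Payload* p) { return p.value; }  // never stores p

    int caller(Payload* p)
    {
        // Naive ref counting brackets every call with retain/release:
        auto tmp = retain(p);
        auto result = useValue(tmp);
        release(tmp);
        // Escape analysis can prove useValue never keeps p alive past
        // the call, so the pair above can be elided without changing
        // the program's behaviour.
        return result;
    }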
> Your arguments are based on a presupposition that D should only
> be used a certain way;
No, it is based on what the D language semantics are, on the
stated philosophy, and on the changes that such a switch would
involve.
I have no problem with D switching to a generational GC. Like
you, I think most programs can be made to work fine with the
overhead, but then you would need to change the philosophy that
Walter is following. You would also need to either invest a lot
into pointer analysis to keep a clean separation between
GC references and non-GC references, or create a more unforgiving
type system that ensures such a separation.
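By the latter I mean something along these lines (the wrapper
types are hypothetical, nothing like this exists in D today; the
point is only that the type system, rather than analysis, tells
the collector which fields can hold GC pointers):

    import core.stdc.stdlib : free, malloc;

    struct GCRef(T) if (is(T == class))
    {
        T obj;               // always a GC-heap reference, must be scanned
    }

    struct RawBuf
    {
        void* mem;           // never a GC pointer, never scanned
        @disable this(this); // keep ownership simple for the sketch
        this(size_t n) { mem = malloc(n); }
        ~this() { free(mem); }
    }

    class Widget
    {
        GCRef!Widget parent; // the collector scans this field
        RawBuf pixels;       // the collector can skip this one entirely
    }

With that kind of separation a precise or generational collector
never has to treat a field conservatively.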
I think that having a generational GC (or other high-level,
low-latency solutions) probably would be a good idea, but I don't
see how anyone could convince Walter to change his mind on such
issues, especially as there are quite a few semantic flaws in D
that would be easy to fix, yet Walter will not fix them because
he likes D as it is or thinks it would be too much of a breaking
change.
You would need to change the D philosophy from "performant with
some convenience" to "convenient with some means to write
performant code".
I agree with you that the latter philosophy probably would
attract more users. It is hard to compete with C++ and Rust on
the former.
But I am not sure if Walter's goal is to attract as many users as
possible.