More radical ideas about gc and reference counting
Paulo Pinto via Digitalmars-d
digitalmars-d at puremagic.com
Tue May 6 00:16:00 PDT 2014
On Tuesday, 6 May 2014 at 03:40:47 UTC, Manu via Digitalmars-d wrote:
> On 3 May 2014 18:49, Benjamin Thaut via Digitalmars-d
> <digitalmars-d at puremagic.com> wrote:
>> On 30.04.2014 22:21, Andrei Alexandrescu wrote:
>>>
>>> Walter and I have had a long chat in which we figured our current
>>> offering of abstractions could be improved. Here are some thoughts.
>>> There's a lot of work ahead of us on that and I wanted to make sure
>>> we're getting full community buy-in and backup.
>>>
>>> First off, we're considering eliminating destructor calls from within
>>> the GC entirely. It makes for a faster and better GC, but the real
>>> reason here is that destructors are philosophically bankrupt in a GC
>>> environment. I think there's no need to argue that in this community.
>>>
>>> The GC never guarantees calling destructors even today, so this
>>> decision would be just a point in the definition space (albeit an
>>> extreme one).
>>>
>>> That means classes that need cleanup (either directly or by having
>>> fields that are structs with destructors) would need to garner that
>>> by other means, such as reference counting or manual. We're
>>> considering deprecating ~this() for classes in the future.
>>>
>>> Also, we're considering a revamp of built-in slices, as follows.
>>> Slices of types without destructors stay as they are.
>>>
>>> Slices T[] of structs with destructors shall be silently lowered into
>>> RCSlice!T, defined inside object.d. That type would occupy THREE
>>> words, one of which being a pointer to a reference count. That type
>>> would redefine all slice primitives to update the reference count
>>> accordingly.
>>>
>>> RCSlice!T will not convert implicitly to void[]. Explicit
>>> cast(void[]) will be allowed, and will ignore the reference count (so
>>> if a void[] extracted from a T[] via a cast outlives all slices,
>>> dangling pointers will ensue).
>>>
>>> I foresee any number of theoretical and practical issues with this
>>> approach. Let's discuss some of them here.
>>>
>>>
>>> Thanks,
>>>
>>> Andrei
>>
>>
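
To make the RCSlice!T part of the proposal above a bit more concrete,
here is a rough sketch of what such a three-word wrapper might look
like. The field names, the malloc-based bookkeeping and the helper
method are my own guesses, not anything taken from the proposal:

struct RCSlice(T)
{
    private T* ptr;        // word 1: start of the payload
    private size_t len;    // word 2: number of elements
    private uint* count;   // word 3: pointer to the shared reference count

    this(size_t n)
    {
        import core.stdc.stdlib : malloc;
        import std.conv : emplace;
        ptr = cast(T*) malloc(n * T.sizeof);
        foreach (i; 0 .. n)
            emplace(ptr + i);          // default-construct each element
        len = n;
        count = cast(uint*) malloc(uint.sizeof);
        *count = 1;
    }

    this(this)                         // postblit: every copy bumps the count
    {
        if (count !is null)
            ++*count;
    }

    ~this()                            // the last owner cleans up
    {
        import core.stdc.stdlib : free;
        if (count !is null && --*count == 0)
        {
            foreach (i; 0 .. len)
                destroy(ptr[i]);       // run the element destructors
            free(ptr);
            free(count);
        }
    }

    size_t length() const { return len; }

    ref T opIndex(size_t i)
    {
        assert(i < len);
        return ptr[i];
    }

    // cast(void[]) would bypass the count entirely, which is why a void[]
    // that outlives every counted copy is left dangling.
    void[] asUncountedVoid() { return (cast(void*) ptr)[0 .. len * T.sizeof]; }
}

The point of the postblit/destructor pair is that copying any slice
bumps the shared count, and the last copy to die runs the element
destructors - which is the bookkeeping every slice primitive would have
to be lowered to under the proposal.
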
>> Honestly, that sounds like entirely the wrong approach to me. You're
>> approaching the problem in this way:
>>
>> "We cannot implement a proper GC in D because the language design
>> prevents us from doing so. So let's remove destructors to mitigate
>> the issue of false pointers."
>>
>> While the approach should be:
>>
>> "The language does not allow us to implement a proper GC (anything
>> other than a dirty mark & sweep), so what needs to change to allow a
>> more sophisticated GC to be implemented?"
>
> Couldn't agree more.
> Abandoning destructors is a disaster.
> Without destructors, you effectively have manual memory management,
> or rather, manual 'resource' management, which is basically the same
> thing, even if you have a GC.
> It totally undermines the point of memory management as a
> foundational element of the language if most things are to require
> manual release/finalisation/destruction or whatever you wanna call it.
>
>
>> Also, let me tell you that at work we have a large C# codebase which
>> heavily relies on resource management. So basically every class in
>> there implements C#'s IDisposable interface, which is used to
>> manually call the finalizer on the class (but the C# GC will also
>> call that finalizer!). Basically the entire codebase feels like
>> manual memory management. You have to think about manually
>> destroying every class, and the entire advantage of having a GC,
>> i.e. not having to think about memory management and thus being more
>> productive, vanishes. It really feels like writing C++ with C#
>> syntax. Do we really want that for D?
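
For anyone who has not written C#, the rough D analogue of that
pattern, assuming the GC will not reliably run class destructors,
looks something like the sketch below (the class, method and file
names are purely illustrative):

import std.stdio : File;

class LogFile
{
    private File handle;

    this(string path) { handle = File(path, "w"); }

    // The IDisposable-style part: cleanup has to be requested explicitly,
    // because waiting for the GC to finalize the object is not reliable.
    void close() { handle.close(); }
}

void useIt()
{
    auto log = new LogFile("trace.log");
    scope (exit) log.close();   // manual disposal at every single call site
    // ... write to the log ...
}

Every call site has to remember that scope (exit) line, which is
exactly the "manual resource management despite having a GC" feeling
described above.
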
>
> This is interesting to hear someone else say this. I have always
> found C# - an alleged GC language - to result in extensive manual
> memory management in practice too.
> I've ranted enough about it already, but I have come to the firm
> conclusion that the entire premise of a mark & sweep GC is
> practically corrupt. Especially in D.
> Given this example that you raise with C#, and my own experience that
> absolutely parallels your example, I realise that the GC's failure
> extends into far more cases than just the ones I'm usually
> representing.
>
> I also maintain that GC isn't future-proof in essence. Computer
> memory grows exponentially, and GC performance inversely tracks the
> volume of memory in the system. Anything with an exponential growth
> curve is fundamentally not future-proof.
> I predict a 2025 Wikipedia entry: "GC was a cute idea that existed
> for a few years in the early 2000s, while memory ranged from hundreds
> of MB to a few GB, but quickly became unsustainable as computer
> technology advanced".
>
Azul's Java VM GC was already handling 1 TB heaps in 2010.

http://qconsf.com/sf2010/dl/qcon-sanfran-2010/slides/GilTene_GCNirvanaHighThroughputAndLowLatencyTogether.pdf

GC is not the only way of doing automatic memory management, but this
ongoing discussion stems more from D's current GC status and the
corresponding GC phobia in the C world, and less from what a modern GC
is capable of.
--
Paulo