DIP60: @nogc attribute

Manu via Digitalmars-d digitalmars-d at puremagic.com
Wed Apr 16 21:10:32 PDT 2014


On 17 April 2014 08:42, Adam Wilson via Digitalmars-d <
digitalmars-d at puremagic.com> wrote:

> On Wed, 16 Apr 2014 04:50:51 -0700, Manu via Digitalmars-d <
> digitalmars-d at puremagic.com> wrote:
>
>> I am convinced that ARC would be acceptable, and I've never heard anyone
>> suggest any proposal/fantasy/imaginary GC implementation that would be
>> acceptable...
>> In complete absence of a path towards an acceptable GC implementation, I'd
>> prefer to see people that know what they're talking about explore how
>> refcounting could be used instead.
>> GC backed ARC sounds like it would acceptably automate the circular
>> reference catching that people fuss about, while still providing a workable
>> solution for embedded/realtime users; disable(/don't link) the backing GC,
>> make sure you mark weak references properly.
>
> I'm just going to leave this here. I mentioned it previously in a debate
> over ARC vs. GC but I couldn't find the link at the time.
>
> http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf
>
> The paper is pretty math heavy.
>
> Long story short, Tracing vs. Ref-counting are algorithmic duals and
> therefore do not significantly differ. My read of the article is that all
> the different GC styles are doing is pushing the cost somewhere else.
>

Of course, I generally agree. Though realtime/embedded work values smooth,
predictable, reliable operation more than burst+stutter behaviour.

That said, I do think that GC incurs a greater cost than ARC in aggregate.
The scanning process, and the cache implications of scanning the heap, are
cataclysmic. I don't imagine that some trivial incs/decs would sum to the
same amount of work, even though they happen more frequently.

GC has a nasty property where its workload is inversely proportional to
available memory: as free memory decreases, the frequency of scans increases.
Low-memory environments are home to an important class of native-language
users that shouldn't be ignored (embedded, games consoles, etc).

Further, the cost of a GC sweep increases with the size of the heap. So as
free memory decreases, you can expect longer scans, more often... Yeah, win!

There are some other disturbing considerations; over time, as device memory
grows, GC costs will increase proportionally.
This is silly, and I'm amazed a bigger deal isn't made about the
future-proofness of GC. In 5 years, when we all have 512 GB of RAM in our
devices, how much time is the GC going to spend scanning that much memory?
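(Back-of-envelope, with numbers I'm picking purely for illustration: if a
collection has to touch the whole heap at, say, 10 GB/s of effective scan
bandwidth, 512 GB is on the order of 50 seconds per collection, and even a
few GB of live data is hundreds of milliseconds, i.e. many 16 ms frames for
a 60 Hz game. The exact figures don't matter; the point is that the cost
scales with the amount of memory scanned.)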

GC might work okay in the modern sweet spot of hundreds of MB to a few GB of
total memory, but I think that as memory grows over time, GC will become more
problematic.

ARC on the other hand has a uniform, predictable, constant cost that never
changes with respect to any of these quantities. ARC will always perform at
the same speed, even 10 years from now, even on my Nintendo Wii, even on my
PIC microcontroller. As an embedded/realtime programmer, I can work with
this.
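To illustrate what I mean by constant cost, here is a hypothetical
reference-counted wrapper (a sketch I'm inventing for this post, not an
existing druntime/Phobos type). Every copy is a single increment and every
destruction a single decrement; none of it depends on how much memory the
process is using:

struct RC(T)
{
    private T* payload;
    private size_t* count;

    this(T value)
    {
        import core.stdc.stdlib : malloc;
        payload  = cast(T*) malloc(T.sizeof);
        count    = cast(size_t*) malloc(size_t.sizeof);
        *payload = value;
        *count   = 1;
    }

    this(this)   // copy: one increment, O(1) regardless of heap size
    {
        if (count !is null) ++*count;
    }

    ~this()      // destruction: one decrement; free when it reaches zero
    {
        import core.stdc.stdlib : free;
        if (count !is null && --*count == 0)
        {
            free(payload);
            free(count);
        }
    }
}

unittest
{
    auto a = RC!int(42);
    auto b = a;      // constant-time copy
}                    // constant-time destruction of both copies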


> ARC may in fact be the most advantageous for a specific use case, but that
> in no way means that all use cases will see a performance improvement, and
> in all likelihood, may see a decrease in performance.
>

If you had to choose one as a default foundation, would you choose one that
eliminates a whole class of language users, or one that is an acceptable
compromise for all parties?
I'd like to see an argument for "I *need* GC. GC-backed-ARC is unacceptable
for my use case!". I'll put money on that requirement never emerging, and I
have no idea who that user would be.

Also, if you do see a decrease in performance, I suspect it only shows up
under certain conditions. As I said above, if your device begins to run low
on memory, or your users are working on unusually large projects/workloads,
all of a sudden your software starts performing radically differently than
you observed during development.
Naturally you don't typically profile that environment, but it's not
unlikely to occur in the wild.


> That makes ARC a specialization for a certain type of programming, which
> would then remove D from the "Systems" category and place it in a
> "Specialist" category.


What it does is NOT eliminate a whole class of users. Are you going to
tell me that you have a hard dependency on the GC, and that something else
that does exactly the same thing is incompatible with your requirements?
There's nothing intrinsically "systems" about GC over ARC, whatever that
means.


> One could argue that due to the currently non-optional status of the GC
> that D is currently a "Specialist" language, and I would be hard pressed to
> argue against that.
>

So what's wrong with a choice that does exactly the same thing, but is less
exclusive?


> @nogc removes the shackles of the GC from the language and thus brings it
> closer to the definition of "Systems". @nogc allows programmers to revert
> to C-style resource management without enforcing a specialized RM system,
> be it GC or ARC. @nogc might not make you run through the fields singing
> D's praises, but it is entirely consistent with the goals and direction of
> D.


I see some value in @nogc. I'm not arguing against it. My point was that I
feel it is missing the point, and I fear the implications... does this
represent a dismissal of the root problem?
See my points about fracturing frameworks and libraries into isolated
worlds. This is a critical problem in C/C++ that I would do literally
anything to avoid seeing repeated in D.
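For concreteness, here is roughly what I understand the proposed attribute
to enforce (my reading of DIP60, a sketch rather than the final semantics):
a function marked @nogc is rejected at compile time if it can trigger a GC
allocation.

@nogc int sum(const(int)[] values)
{
    int total;
    foreach (v; values)
        total += v;              // plain arithmetic: fine under @nogc
    // auto copy = values.dup;   // would be an error: .dup allocates from the GC
    return total;
}

void main()
{
    int[4] buf = [1, 2, 3, 4];
    auto s = sum(buf[]);         // stack storage, no GC allocation required
    assert(s == 10);
}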