Random points from a D n00b CTO

Manu via Digitalmars-d digitalmars-d at puremagic.com
Mon Jul 14 07:09:10 PDT 2014


On 14 July 2014 22:39, w0rp via Digitalmars-d <digitalmars-d at puremagic.com>
wrote:

> I don't think ARC would work in D. You'd need support for controlling the
> reference count in certain situations. You'd need type signatures for
> different reference types. You'd have to remove the GC.malloc function, as
> it would actually be manually memory managed with ARC, or otherwise return
> a fatter pointer, which would make it pretty useless. I hold more hope for
> someone improving the garbage collector implementation. This will happen
> sooner when someone who needs it takes the time to write it.
>

If it were that simple, I reckon someone would have done it by now. Nobody
seems to know how to do it.

> The key issue with GC is the pause time leading to dropped frames. Google
> has recently shown that you can use a garbage collector pretty excessively
> without losing any frames at 60FPS, by minimising the collection time so it
> fits within a frame.
>

It needs to fit within a *fraction* of a frame; I would be happy with
500µs. I would consider 1-2ms extremely generous; that's up to 10% or more of
the frame's CPU time! That's a lot less game, and it's the sort of difference
that becomes clearly visible when comparing against the competition.
Frame time budgets are very carefully allocated and managed.

https://www.youtube.com/watch?v=EBlTzQsUoOw#t=23m
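
To put rough numbers on that (back-of-envelope arithmetic only, using the
pause times above and a 60FPS target):

import std.stdio;

void main()
{
    enum double frameMs = 1000.0 / 60;     // ~16.7ms per frame at 60FPS
    foreach (pauseMs; [0.5, 1.0, 2.0])     // 500µs target vs. the "generous" 1-2ms
        writefln("%.1fms pause = %4.1f%% of a %.2fms frame",
                 pauseMs, 100 * pauseMs / frameMs, frameMs);
}

Even the 2ms case eats more than a tenth of the frame before any game code
has run.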


Google aren't really in the high-end gamedev business. I'm sure I could
make Angry Birds run without dropping frames with a GC that took no more
than 6ms... a real game, not so simple.

> If you wrote most of your game code with allocations you controlled and
> only had some minor GC activity with a sufficiently well written garbage
> collector, you wouldn't notice the GC getting in your way.
>

I won't notice the GC getting in the way when collection takes no more than
500µs. Tell me how that can be achieved and you'll never hear me speak of
it again.
My first thought was an incremental GC; I was talking about that for
years. The experts agree it's impossible.
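
For what it's worth, "allocations you controlled" in D today looks roughly
like this (a minimal sketch, not code from any real engine): allocate the
working set up front and keep the per-frame loop allocation-free, so the GC
has nothing new to chase.

struct Particle { float pos = 0, vel = 0; }

struct ParticleSystem
{
    Particle[] particles;                  // the whole working set, allocated once

    this(size_t capacity) { particles = new Particle[capacity]; }

    void update(float dt)
    {
        foreach (ref p; particles)         // no allocations in the hot loop,
            p.pos += p.vel * dt;           // so no per-frame GC pressure
    }
}

void main()
{
    auto ps = ParticleSystem(10_000);      // one allocation at startup
    foreach (frame; 0 .. 3)
        ps.update(1.0f / 60);              // steady state: zero allocations
}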

The thing about GC is that its performance scales inversely with the total
size of the heap, not the number of allocations. Heaps grow over time, so this
problem will only get worse! And its execution frequency scales exponentially
as free memory decreases.
Embedded devices, and in particular video game consoles (with relatively large
amounts of memory but very little to spare), manifest the very worst of this
operating environment as standard operation.
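
The only lever D offers for the *when* of that cost is the explicit control
in core.memory; a minimal sketch (it doesn't shrink the pause, it only lets
you choose where in the frame it lands):

import core.memory : GC;

void runFrame()
{
    // ... game update and render, with the collector held off ...
}

void main()
{
    GC.disable();                   // best effort: a collection can still run
                                    // if memory is exhausted, but not on a whim
    foreach (frame; 0 .. 3)
    {
        runFrame();
        GC.enable();
        GC.collect();               // pay the pause at a point we chose; it
                                    // still scans the whole heap, which is
                                    // exactly the scaling problem above
        GC.disable();
    }
}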

> Whichever automatic memory management scheme you choose, if you are writing
> real time applications, you will ultimately have to optimise by taking more
> control of memory at some point. There's no automatic solution for this.
>

I'm particularly worried about libraries. I can do whatever I have to with
memory that I control.
You can't approach a modern, ambitious project without depending on probably
tens of libraries. I've made my displeasure quite plain on many occasions at
spending years of my life re-inventing wheels for no reason other than other
developers' memory allocation patterns. I'm not making this up; I have
literally wasted years of my life to date on this, and D makes the
situation much, much worse by making an incompatible memory model the
standard. I have nightmares about this stuff.
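
(For the curious, "memory that I control" in present-day D means something
like the following sketch of manual lifetime management with malloc/emplace.
Mesh, createMesh and destroyMesh are made-up names for illustration; this
works fine for my own types, but it doesn't help when a library hands back
GC-owned slices and objects.)

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

// "Mesh" is just an example type for this sketch
struct Mesh { float x = 0, y = 0, z = 0; }

Mesh* createMesh()
{
    // allocate from the C heap: invisible to the GC, freed when *I* decide
    auto mem = cast(Mesh*) malloc(Mesh.sizeof);
    return emplace(mem);
}

void destroyMesh(Mesh* m)
{
    destroy(*m);                    // run any destructor, then give it back
    free(m);
}

void main()
{
    auto m = createMesh();
    destroyMesh(m);
}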