D on next-gen consoles and for game development

Manu turkeyman at gmail.com
Fri May 24 07:50:43 PDT 2013


On 24 May 2013 20:24, Regan Heath <regan at netmail.co.nz> wrote:

> On Fri, 24 May 2013 01:11:17 +0100, Manu <turkeyman at gmail.com> wrote:
>
>> /agree, except for the issue I raised: when ~ is used in a phobos
>> function, that function is now off-limits. And there's no way to know
>> which functions those are...
>>
>
> It's not the allocation caused by ~ that's the issue though, is it? It's
> the collection it might trigger, right?
>

Yes, but the unpredictability is the real concern. It's hard to control
something that you don't know about.
If the phobos function can avoid the allocation, then why not avoid it?
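
For illustration, here's the sort of thing I mean (just a sketch; hudText
is a made-up example): instead of building the string with ~, write into a
caller-provided buffer with std.format.sformat and nothing touches the GC:

    import std.format : sformat;

    void hudText(const(char)[] name, int score)
    {
        // string s = name ~ ": " ~ score.to!string;  // each ~ GC-allocates
        char[64] buf;                                 // stack storage, no GC
        const(char)[] s = sformat(buf[], "%s: %s", name, score);
        // s is a slice of buf; copy it out if it must outlive this scope
        // (sformat throws if the formatted text doesn't fit in buf)
    }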


> So what you really need are 3 main things:
>
> 1. A way to prevent the GC collecting until a given point(*).
> 2. A way to limit the GC collection time.
> 3. For phobos functions to be optimised to not allocate or to use alloca
> where possible.
>
> #1 should be trivial.
>

I think we can already do this.
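
At least, something along these lines with core.memory should already
cover it (playLevel/runGameLoop/onSceneChange are just placeholder names):

    import core.memory : GC;

    void playLevel()
    {
        GC.disable();            // suppress implicit collections
        scope(exit) GC.enable();

        runGameLoop();           // allocations still succeed, no sweeps
    }

    void onSceneChange()
    {
        GC.collect();            // explicit collect behind the black screen
        GC.minimize();           // hand unused pages back to the OS
    }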

> #2 is much harder to achieve in the general case (I believe).
>

The incremental (+precise) GC idea, I think, would be the silver bullet
for games!

> #3 is not essential but desirable and could be added/improved over time.
>

Yes, I think effort to improve this would be universally appreciated.
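
Even just preferring the lazy variants goes a long way; for instance,
splitter below allocates nothing, where split builds a whole new array:

    import std.array : split;
    import std.algorithm : splitter;
    import std.stdio : writeln;

    void main()
    {
        auto csv = "health,ammo,armour";

        string[] parts = csv.split(",");  // allocates a new array of slices
        foreach (p; csv.splitter(","))    // lazy range, no GC allocation
            writeln(p);                   // each p is a slice into csv
    }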


> (*) Until the collection point the GC would ask the OS for more memory (a
> new pool or page) or fail and throw an Error.  Much like in Leandro's
> concurrent GC talk/example where he talks about eager allocation.
>

Bear in mind, most embedded hardware does not have virtual memory, and
often a fairly small hard limit.
If we are trying to manually sequence our allocations and collects, e.g.
scheduling collects when you change scenes on a black screen or something
like that, then you can't have random phobos functions littering small
allocations all over the place.
The usefulness of #1 depends largely on #3.


> In order to make #2 easier to achieve I had an idea; not sure how workable
> this is...
>
> Let's imagine you can mark a thread as not stopped by the pause-the-world.
>  Let's imagine it still does allocations which we want to collect at some
> stage.  How would this work?
>
> 1. The GC would remove the thread stack and global space from its list of
> roots scanned by normal collections.  It would not pause it on normal
> collections.
>
> 2. (*) above would be in effect, the first allocation in the thread would
> cause the GC to create a thread local pool, this pool would not be shared
> by other threads (no locking required, not scanned by normal GC
> collections).  This pool could be pre-allocated by a new GC primitive
> "GC.allocLocalPool();" for efficiency.  Allocation would come from this
> thread-local pool, or trigger a new pool allocation - so minimal locking
> should be required.
>
> 3. The thread would call a new GC primitive at the point where collection
> was desired i.e. "GC.localCollect(size_t maxMicroSecs);".  This collection
> would be special, it would not stop the thread, but would occur inline.  It
> would only scan the thread local pool and would do so with an enforced
> upper bound collection time.
>
> 4. There are going to be issues around 'shared' /mutable/ data, e.g.
>
>  - The non-paused thread accessing it (esp during collection)
>  - If the thread allocated 'shared' data
>
> I am hoping that if the thread main function is marked as @notpaused (or
> similar) then the compiler can statically verify neither of these occur and
> produce a compile time error.
>
> So, that's the idea.  I don't know the current GC all that well so I've
> probably missed something crucial.  I doubt this idea is revolutionary and
> it is perhaps debatable whether the complexity is worth the effort, and
> whether it actually makes placing an upper bound on the collection any
> easier.
>
> Thoughts?


It sounds kinda complex... but I'm not qualified to comment.
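If I follow it though, usage would look roughly like this. To be clear,
@notpaused, GC.allocLocalPool and GC.localCollect are all hypothetical
primitives taken straight from the proposal above, and the loop body is a
placeholder:

    import core.memory : GC;

    // @notpaused: the proposed attribute; this thread is never stopped by
    // the pause-the-world, and the compiler would reject any access to
    // shared mutable data within it.
    void streamingThread() // @notpaused
    {
        GC.allocLocalPool();      // proposed: pre-allocate the
                                  // thread-local pool up front
        while (stillStreaming())  // placeholder condition
        {
            loadNextChunk();      // placeholder work; its allocations land
                                  // in the thread-local pool
            GC.localCollect(500); // proposed: scan only this thread's pool,
                                  // bounded to ~500 microseconds
        }
    }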