John Carmack applauds D's pure attribute
foo at bar.com
Sun Feb 26 23:43:58 PST 2012
On Monday, 27 February 2012 at 04:17:24 UTC, Andrew Wiley wrote:
> On Sun, Feb 26, 2012 at 11:05 AM, Paulo Pinto
> <pjmlp at progtools.org> wrote:
>> Am 26.02.2012 17:34, schrieb so:
>>> On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:
>>>> Would this even be an issue on multicore systems where the GC
>>>> can run concurrently? As long as the stop-the-world parts are
>>>> below some given threshold?
>>> If it is possible to guarantee that, I don't think anyone
>>> would bother with manual MM.
>> Well, some game studios seem to be quite happy with XNA, which
>> implies using a GC:
> I don't really see why you keep bringing up these examples. This
> is a performance issue, which means you can certainly ignore it
> and things will still work, just not as well. I've seen 3D games
> in Java; they always suffer from an awkward pause at fairly
> regular intervals. This is why the AAA shops are still writing
> most of their engines in C++. You will always be able to find
> examples of developers that chose to ignore the issue for one
> reason or another.
> To make it clear, I'm not trying to antagonize you here. I agree
> that GC is in general a superior technical solution to manual
> memory management, and given the research going into GC
> technology, I'm sure that long term it's probably a good idea.
> However, I disagree with your statement that "the main issue is
> that the GC needs to be optimized, not that manual memory
> management [is necessary]." Making a GC that can run fast enough
> to make this sort of thing a non-issue is currently so hard that
> it can only be used in niche situations. That will probably
> change, but it will change over the course of several years.
> Manual memory management, however, is here now and dead simple
> to use so long as the developer understands the semantics.
> Programming in that model is harder, but not nearly as bad as,
> say, thread-based concurrency with race conditions and
> deadlocks. Manual memory management is much simpler to deal
> with than many other things programmers already take on.
> When you want your realtime application to behave in a certain
> way, would you rather spend months or years working on the GC
> and writing code in a completely different style to deal with
> the issue, or use manual memory management *now* and deal with
> the slightly more difficult programming model? Cost/benefit
> wise, GC just doesn't make a lot of sense in this sort of
> scenario unless you have a lot of resources to burn or a
> specific reason to choose a GC-mandatory platform.
> Again, I'm not saying GC is bad. I'm saying that in this area,
> the cost/benefit ratio doesn't say you should spend your time
> improving the GC to make things work. For everyone else, GC is
> great, and I applaud David Simcha's efforts to improve D's GC
> performance.
It does take years, but please note that those referenced papers
are already several years old; some are from 2005-6.
That doesn't mean D shouldn't support manual memory management,
but claiming that GC doesn't work for real-time is a [religious]
myth. Clearly the cost of the research has already been paid
years ago, and the algorithms were already documented and tested.
OT: one of the papers was written at my university.