[Off-Topic] John Carmack's point of view on GC and languages like JavaScript

wjoe invalid at example.com
Mon Aug 8 15:05:49 UTC 2022


On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
> On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
>> On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
>>> On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
>>>> [...]
>>>
>>> That's kinda bullshit, it depends on the GC implementation
>>>
>>> D's GC is not good for 99.99% "of all software in the world", 
>>> it's wrong to say this, and is misleading
>>>
>>> Java's are, because it offers multiple implementations that 
>>> you can configure, and together they cover a wide range of 
>>> use cases
>>>
>>> D's GC is not a panacea. It's nice to have, but it's not 
>>> something to brag about, especially when it STILL stops the 
>>> world during collection, and is STILL not scalable
>>>
>>> Go did it right by focusing on low latency and parallelism; 
>>> we should copy their GC
>>
>> It's actually 69.420% of all software in the world
>
> Exactly, hence why this quote is bullshit
>
> But nobody wants to understand the problems anymore
>
> https://discord.com/blog/why-discord-is-switching-from-go-to-rust
>
> Let's miss every opportunity to capture market share

I don't see how that is related. According to the investigation 
described in the article you linked, Go's GC is set up to run 
every 2 minutes, no questions asked. That's not true for D's GC.
Instead of jumping on the Rust hype train they could have forked 
Go's GC and solved the actual performance problem - the forced 
2-minute GC run.

As far as D's default GC is concerned: last time I checked, it 
only runs a collection cycle on an allocation, and once the GC 
has obtained memory from the OS it won't release it back until 
the program terminates.
This means the GC can re-allocate previously allocated, but since 
collected, memory basically for free, because there's no context 
switch into the kernel and back, which would also carry the extra 
cost of reloading cache lines. All of this depends on a lot of 
factors, though, so it may or may not be a big deal.
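
A minimal sketch (my illustration, not from the quoted article) 
of how to observe that with core.memory: after a collection, the 
freed memory typically shows up in the GC's own pools (freeSize) 
instead of going back to the OS.

```
import core.memory : GC;
import std.stdio : writeln;

void main()
{
    // Allocate a few MiB on the GC heap, then drop the references.
    foreach (i; 0 .. 1_000)
    {
        ubyte[] buf = new ubyte[](4096);
        buf = null;
    }

    GC.collect();
    auto s = GC.stats();

    // The collected memory is typically still owned by the GC and
    // can be handed out again without another trip to the OS.
    writeln("used by live objects: ", s.usedSize);
    writeln("held by the GC, free: ", s.freeSize);
}
```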

Also, when you run your own memory management, keep in mind that 
your manual call to *alloc/free is just as expensive as when the 
GC calls it. Keep in mind as well that your super fast allocator 
(as in the lib/system call you use to allocate the memory) may 
not actually allocate the memory at the point of the call; the 
real allocation may be deferred until the memory is first 
accessed, which can cause lag akin to that of a collection cycle, 
depending on the amount of memory you allocate.
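
To illustrate that deferred-allocation point, here's a rough 
sketch (assuming 4 KiB pages; the helper name is made up): 
touching every page up front moves the hidden commit cost to a 
moment of your choosing instead of the middle of a hot loop.

```
import core.stdc.stdlib : malloc, free;

enum pageSize = 4096; // assumption; query the OS for the real value

// Write one byte per page so the OS backs the memory now, at load
// time, rather than lazily on first access later.
void* allocAndPretouch(size_t bytes)
{
    auto p = cast(ubyte*) malloc(bytes);
    if (p is null) return null;

    for (size_t off = 0; off < bytes; off += pageSize)
        p[off] = 0;

    return p;
}

void main()
{
    auto p = allocAndPretouch(64 * 1024 * 1024); // 64 MiB
    // ... the hot path uses the memory without page-fault hiccups ...
    free(p);
}
```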

It's possible to pre-allocate memory with a GC, re-use those 
buffers, and slice them as you see fit, without ever triggering a 
collection cycle.
You can also disable garbage collection in D's GC for hot areas.
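
A rough sketch of that combination using core.memory.GC (the 
buffer name and sizes are arbitrary):

```
import core.memory : GC;

ubyte[] frameBuffer;

void hotLoop()
{
    if (frameBuffer is null)
        frameBuffer = new ubyte[](1024 * 1024); // one up-front allocation

    GC.disable();             // no allocation-triggered collections in here
    scope (exit) GC.enable(); // (the GC may still run if it's out of memory)

    foreach (frame; 0 .. 600)
    {
        auto scratch = frameBuffer[0 .. 64 * 1024]; // slicing doesn't allocate
        scratch[] = 0; // ... fill and process the slice ...
    }
}
```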

IME the GC saves a lot of headaches, many more than it causes, 
and I'd much rather have more convenience in communicating my 
intentions to the GC than clutter every API with allocator 
parameters.

Something like:

```
@GC(DND) // Do Not Disturb
{
   foreach (...)
     // hot code goes here and no collection cycles will happen
}
```
or,
```
void load_assets()
{
   // allocate, load stuff, etc..
   @GC(collect); // lag doesn't matter here
}
```
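
The attributes above are made-up syntax, of course; as far as I 
know, the closest you get today is the explicit core.memory API:

```
import core.memory : GC;

void load_assets()
{
    // allocate, load stuff, etc..
    GC.collect();   // lag doesn't matter here - roughly @GC(collect)
}

void hot_section()
{
    GC.disable();             // roughly the @GC(DND) idea
    scope (exit) GC.enable();
    // hot code goes here; allocations won't trigger collection cycles
}
```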

