Beeflang garbage seeker/collector
Carl Sturtivant
sturtivant at gmail.com
Tue Feb 20 19:57:10 UTC 2024
On Tuesday, 20 February 2024 at 18:15:45 UTC, Marconi wrote:
>> This idea is based upon a mistaken ideology instead of what is
>> actually going on with manual memory management and modern
>> garbage collectors. Explained here:
>> [Garbage Collection for Systems
>> Programmers](https://bitbashing.io/gc-for-systems-programmers.html)
>
> Garbage collection isn’t a silver bullet, as you said, so it
> should NOT be the mandatory/default memory management.
"mandatory/default" is muddling two very different things.
1. Mandatory GC.
Agreed that it should NOT be mandatory.
2. Default GC.
Makes perfect sense, as per the article (sketch below).
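To make that distinction concrete, here is a minimal D sketch of
my own (not from the article): the GC is simply what you get by
default when you write `new`, while `@nogc` and the C heap let
you opt out wherever you decide you must.

```d
import core.stdc.stdlib : malloc, free;
import std.stdio : writeln;

// Default: `new` allocates from the GC heap; no explicit free is
// ever needed.
int[] makeWithGC(size_t n)
{
    return new int[n];
}

// Opt out: @nogc guarantees at compile time that this function
// performs no GC allocation; the memory comes from the C heap.
@nogc int* makeManually(size_t n)
{
    return cast(int*) malloc(n * int.sizeof);
}

void main()
{
    auto gcSlice = makeWithGC(1000); // reclaimed by the GC, eventually
    auto raw = makeManually(1000);   // caller owns this and must free it
    scope (exit) free(raw);
    writeln(gcSlice.length);
}
```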
> Good programmers should have the most control of what's going on
> in their software.
Direct control? No, only when it is necessary for the real-time
behavior of the software. This is why we have operating system
kernels and high-level languages with run-time systems: so that
programmers can delegate a great deal, in many different ways,
and have things done for them automatically, obviating low-level
repetitive make-work, administration of details, and error-prone
bookkeeping.
Using the word "control" here misses the point. Taking this
literally would mean not using an OS Kernel because as discussed
in the article it removes control and does not provide
guarantees! Then you'd just write code for hardware in a language
in which everything you write corresponds to a known action of
the hardware with a concrete time bound. Forth achieves this.
Sometimes it's a useful thing to do because it can guarantee
"hard real-time" when that is needed.
Mostly what is needed is "soft real-time". As the article
indicates, a modern GC gives the fastest storage allocation, and
for soft real-time it is a good default solution. Manual
allocation is more expensive, and worse, it actually offers less
control than the ideology presumes. Quoting the article:
> **`free()` is not free.** A general-purpose memory allocator
> has to maintain lots of internal, global state. What pages have
> we gotten from the kernel? How did we split those up into
> buckets for differently-sized allocations? Which of those
> buckets are in use? This gives you frequent contention between
> threads as they try to lock the allocator’s state, or you do as
> jemalloc does and keep thread-local pools that have to be
> synchronized with even more code.
>
> Tools to automate the “actually freeing the memory” part, like
> lifetimes in Rust and RAII in C++, don’t solve these problems.
> They absolutely aid correctness, something else you should care
> deeply about, but they do nothing to simplify all this
> machinery. Many scenarios also require you to fall back to
> shared_ptr/Arc, and these in turn demand even more metadata
> (reference counts) that bounces between cores and caches. And
> they leak cycles in your liveness graph to boot.
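To illustrate that last point about cycles with a small D sketch
of my own (not from the article): a tracing GC reclaims a cycle
once it becomes unreachable, which plain reference counting never
would. (Whether the finalizers fire on this particular collect()
call isn't guaranteed; with a conservative collector a stale
stack value can keep the objects alive.)

```d
import core.memory : GC;
import core.stdc.stdio : printf;

class Node
{
    Node other;   // the link that forms the cycle
    // printf does not allocate, so it is safe in a GC finalizer
    ~this() { printf("Node finalized\n"); }
}

void makeCycle()
{
    auto a = new Node;
    auto b = new Node;
    a.other = b;
    b.other = a;   // a <-> b: reference counts would never reach zero
}                  // both objects become unreachable on return

void main()
{
    makeCycle();
    GC.collect();  // a tracing collector can reclaim the cycle anyway
}
```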
So using the OS kernel to provide manual memory management is
itself fraught with a lack of control! It merely gives you the
illusion of control. Did you read the sections "the illusion of
control" and "lies people believe about memory management"?
Back to the article:
> Modern garbage collection offers optimizations that alternatives
> can not. A moving, generational GC periodically recompacts the
> heap. This provides insane throughput, since allocation is
> little more than a pointer bump! It also gives sequential
> allocations great locality, helping cache performance.
[...]
> Many developers opposed to garbage collection are building
> “soft” real-time systems. They want to go as fast as
> possible—more FPS in my video game! Better compression in my
> streaming codec! But they don’t have hard latency requirements.
> Nothing will break and nobody will die if the system
> occasionally takes an extra millisecond.
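For exactly that kind of soft real-time code, the usual
compromise in D is not to abandon the GC but to decide when
collections are allowed to happen. A rough sketch, where
updateWorld and render are hypothetical stand-ins for the
per-frame work:

```d
import core.memory : GC;

// Hypothetical stand-ins for real per-frame work, not actual APIs.
void updateWorld() { /* game logic; may allocate */ }
void render()      { /* draw the frame */ }

void frameLoop(bool delegate() keepRunning)
{
    while (keepRunning())
    {
        GC.disable();   // no automatic collections mid-frame
        updateWorld();
        render();
        GC.enable();
        GC.collect();   // collect at the frame boundary, on our schedule
    }
}

void main()
{
    int frames;
    frameLoop(() => ++frames <= 3);   // run a few dummy frames
}
```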