low-latency GC
oddp
oddp at posteo.de
Tue Dec 8 14:59:44 UTC 2020
On 06.12.20 06:16, Bruce Carneal via Digitalmars-d-learn wrote:
> How difficult would it be to add a, selectable, low-latency GC to dlang?
>
> Is it closer to "we can't get there from here" or "no big deal if you already have the low-latency GC
> in hand"?
>
> I've heard Walter mention performance issues (write barriers IIRC). I'm also interested in the
> GC-flavor performance trade-offs, but here I'm just asking about feasibility.
>
What our closest competition, Nim, is up to with their mark-and-sweep replacement ORC [1]:
ORC is the existing ARC algorithm (first shipped in version 1.2) plus a cycle collector
[...]
ARC is Nim’s pure reference-counting GC; however, many reference count operations are optimized
away: Thanks to move semantics, the construction of a data structure does not involve RC operations.
And thanks to “cursor inference”, another innovation of Nim’s ARC implementation, common data
structure traversals do not involve RC operations either!
[...]
Benchmark:

  Metric/algorithm    ORC          Mark&Sweep
  Latency (Avg)       320.49 us    65.31 ms
  Latency (Max)       6.24 ms      204.79 ms
  Requests/sec        30963.96     282.69
  Transfer/sec        1.48 MB      13.80 KB
  Max memory          137 MiB      153 MiB
That’s right, ORC is over 100 times faster than the M&S GC. The reason is that ORC only touches
memory that the mutator touches, too.
[...]
- uses 2x less memory than classical GCs
- can be orders of magnitude faster in throughput
- offers sub-millisecond latencies
- suited for (hard) realtime systems
- no “stop the world” phase
- oblivious to the size of the heap or the used stack space.
[1] https://nim-lang.org/blog/2020/12/08/introducing-orc.html