Some Notes on 'D for the Win'

Marco Leise via Digitalmars-d digitalmars-d at puremagic.com
Mon Aug 25 02:14:28 PDT 2014


Am Sun, 24 Aug 2014 06:39:28 +0000
schrieb "Paulo Pinto" <pjmlp at progtools.org>:

> Examples of real, working desktop OS, that people really used at 
> their work, done in system programming languages with GC.
> 
> Mesa/Cedar
> https://archive.org/details/bitsavers_xeroxparcteCedarProgrammingEnvironmentAMidtermRepo_13518000
> 
> Oberon and derivatives
> http://progtools.org/article.php?name=oberon&section=compilers&type=tutorial
> 
> SPIN
> http://en.wikipedia.org/wiki/Modula-3
> http://en.wikipedia.org/wiki/SPIN_%28operating_system%29
> 
> What is really needed for the average Joe systems programmer to 
> get over this "GC in systems programming" stigma is getting the 
> likes of Apple, Google, Microsoft and such to force-feed such 
> programming languages to them.

Yes, but when these systems were invented, was the focus on a
fast, lag-free multimedia experience or on safety? How do you
get the memory for the GC heap when you are just about to
write the kernel that manages the system's physical memory?
Do these systems manage memory manually in performance-sensitive
parts, or do they rely on GC as much as technically feasible?
Could they use their languages as-is, or did they create a fork
for their OS? What was the expected memory space at the time of
authoring the kernel? Does the language allow raw pointers,
unions, interfacing with C etc., or is it more confined, like
Java?
I can see how you might write an OS with GC already in the
kernel. However, there are too many question marks to jump to
conclusions about D.

o the larger the heap, the slower the collection cycles
  (how will it scale in the future with e.g. 1024 GiB RAM?)
o the less free RAM, the more often the collector is called
  (game consoles are always out of RAM)
o tracing GCs have memory overhead
  (memory that could have been used as disk cache, for example)
o some code needs to run in soft real-time, like audio
  processing plugins; stop-the-world GC is bad here
o non-GC threads are still somewhat arcane and system-specific
o if you accidentally store the only reference to a GC heap
  object in a non-GC thread, it might get collected
  (a hypothetical or existing language may have a better
  offering here)
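The last two points do have manual workarounds in druntime today,
though you have to know to use them. A hedged sketch (the Buffer
type and the surrounding scenario are illustrative, but
thread_attachThis and GC.addRoot/removeRoot are real druntime
APIs):

```d
// Sketch: keeping a GC-heap object alive when its only reference
// lives in a thread the D GC does not know about.
import core.memory : GC;
import core.thread : thread_attachThis, thread_detachThis;

class Buffer { ubyte[4096] data; }

void foreignThreadBody(void* payload)
{
    // Option 1: register this externally created thread with
    // druntime, so the GC scans its stack and pauses it during
    // collections.
    thread_attachThis();
    scope (exit) thread_detachThis();

    auto buf = cast(Buffer) payload;
    // ... use buf; the GC now sees it via this thread's stack
}

void pinExample()
{
    auto buf = new Buffer;
    // Option 2: pin the object explicitly so it survives even if
    // no GC-scanned thread holds a reference to it.
    GC.addRoot(cast(void*) buf);
    scope (exit) GC.removeRoot(cast(void*) buf);
    // ... hand cast(void*) buf over to non-GC code here
}
```

Both options are easy to forget, which is exactly the kind of
silent-corruption trap the bullet above describes.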

"For programs that cannot afford garbage collection, Modula-3
provides a set of reference types that are not traced by the
garbage collector."
Someone evaluating D may come across the question: "What if I
end up in one of the 10% of use cases where a tracing GC is not
a good option?" It might be an application developer working
for a company that sells video editing software, who has to
deal with complex projects and object graphs, playing sounds
and videos while running background tasks like generating
preview versions of HD material or auto-saving and serializing
the project to XML.
Or someone writing an IDE auto-completion plugin that has
graphs of thousands of small objects from the source files in
import paths that are constantly modified while the user types
in the code-editor.
Plus anyone who finds him-/herself in a memory constrained
and/or (soft-)realtime environment.
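For what it's worth, the Modula-3 escape hatch quoted above has a
rough counterpart in D: nothing stops you from allocating
performance-critical data outside the GC heap with the C
allocator, so the collector never scans, moves or frees it. A
minimal sketch (the Node type and function name are made up for
illustration):

```d
import core.stdc.stdlib : free, malloc;

struct Node { Node* next; int value; }

Node* makeUntracedNode(int v)
{
    // Allocated outside the GC heap: invisible to the collector,
    // and it must be freed manually.
    auto n = cast(Node*) malloc(Node.sizeof);
    n.next = null;
    n.value = v;
    return n;
}

void main()
{
    auto head = makeUntracedNode(1);
    head.next = makeUntracedNode(2);
    assert(head.value == 1 && head.next.value == 2);
    free(head.next);
    free(head);
}
```

The catch: if such a block stores references back into the GC
heap, you also need GC.addRange so the collector can see them,
which is another easy thing to forget.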

Sometimes this results in the idea that D's GC will someday be
as fast as the primary one in Java or C#, which people found
acceptable for soft-real-time desktop applications.
Others start contemplating whether it is worth writing their own
D runtime to remove the stop-the-world GC entirely.
Personally, I mostly want to be sure that Phobos is transparent
about GC allocations and that not all threads stop for a GC
cycle. That should make soft real-time in D a lot saner :)
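The recently added @nogc attribute (in DMD 2.066, if I'm not
mistaken) goes some way toward that transparency: it turns any
GC allocation inside the annotated function into a compile-time
error, so hot paths can be audited statically. A small sketch:

```d
// @nogc forbids GC allocations inside the function body, so the
// compiler, not a profiler, tells you about hidden allocations.
@nogc int sumSquares(const(int)[] xs)
{
    int total = 0;
    foreach (x; xs)
        total += x * x;   // no hidden allocation here
    return total;
}

// This version would NOT compile, because array concatenation
// allocates from the GC heap:
// @nogc int[] bad(int[] xs) { return xs ~ 1; } // Error

void main()
{
    int[3] data = [1, 2, 3];
    assert(sumSquares(data[]) == 14);
}
```

What it does not cover, of course, is Phobos itself: the standard
library still needs @nogc-clean (or at least documented) code
paths for this to help in practice.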

-- 
Marco


