Disable GC entirely

Paulo Pinto pjmlp at progtools.org
Sun Apr 7 23:35:26 PDT 2013


On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:
> On 7 April 2013 20:59, Paulo Pinto <pjmlp at progtools.org> wrote:
>
>> I am not giving up speed. It just happens that I have been
>> coding since 1986 and I am a polyglot programmer who started
>> doing systems programming in the Pascal family of languages
>> before moving into C and C++ land.
>>
>> Except for some cases, it does not matter whether you get an
>> answer in 1s or 2ms; however, most single-language C and C++
>> developers care about the 2ms case even before starting to
>> code, and that is what I don't approve of.
>>
>
> Bear in mind, most remaining C/C++ programmers are realtime
> programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME
> that you have to run realtime software (a 16ms frame at 60Hz).
> If I chose not to care about 2ms just 8 times, I'd have no time
> left. I would cut off my left nut for 2ms most working days!
> I typically measure execution times in tens of microseconds; if
> something measures in milliseconds, it's a catastrophe that
> needs to be urgently addressed... and you're correct: as a C/C++
> programmer, I DO design with consideration for sub-ms execution
> times before I write a single line of code.
> Consequently, I have seen the GC burn well into the milliseconds
> on occasion, and as such, it is completely unacceptable in
> realtime software.


I do understand that. The thing is, since I have been coding
since 1986, I remember people complaining that C and Turbo Pascal
were too slow: let's code everything in Assembly. Then C became
alright, but C++ and Ada were too slow; god forbid you call
virtual methods or overloaded operators, in C++'s case.

Afterwards the same discussion came around with the JVM and .NET
environments, which, while making GC widespread, also had the sad
side effect of making younger generations think that safe
languages require a VM, when that is not true.

Nowadays template-based code beats C, systems programming in
mainstream OSes is moving to C++ and leaving C behind, while some
security-conscious areas are adopting Ada and SPARK.

So when someone makes claims about the speed benefits that C and
C++ currently enjoy, I smile, as I remember having this same kind
of discussion with C playing the role of the too-slow language.


>
> Walter's claim is that D's inefficient GC is mitigated by the
> fact that D produces less garbage than other languages, and this
> is true to an extent. But given that is the case, to be
> reliable, it is of critical importance that:
> a) the programmer is aware of every allocation they are making;
> they can't be hidden inside benign-looking library calls like
> toUpperInPlace.
> b) all allocations should be deliberate.
> c) helpful messages/debugging features need to be available to
> track where allocations are coming from; standardised
> statistical output would be most helpful.
> d) alternatives need to be available for the functions that
> allocate by nature, or an option for user-supplied allocators,
> like the STL, so one can allocate from a pool instead.
> e) D is not very good at reducing localised allocations to the
> stack; this needs some attention (array initialisation is
> particularly dangerous).
> f) the GC could do with budgeting controls. I'd like to assign
> it 150us per 16ms, and have it defer excess workload to later
> frames.
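
Points (a) and (c) of that list can be illustrated in a few lines
of D. A minimal sketch, not anyone's official tooling, assuming a
druntime new enough to expose GC.stats() in core.memory (the
exact fields vary by release):

import core.memory : GC;
import std.stdio : writefln;
import std.string : toUpperInPlace;

void main()
{
    auto s = "straße".dup;  // 'ß' uppercases to "SS", so the buffer may grow
    immutable before = GC.stats().usedSize;
    toUpperInPlace(s);      // "in place" by name, but may still reallocate
    immutable after = GC.stats().usedSize;
    // Rough delta only; a collection in between would skew the numbers.
    writefln("hidden GC allocation: %s bytes", after - before);
}

Standardised output of exactly this kind of delta is what point
(c) asks for.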


No doubt D's GC needs to be improved, but I doubt that making D a
manually memory-managed language will improve its adoption, given
that all new systems programming languages use either GC or
reference counting as the default memory management.

What you need is a way to do controlled allocations for the few
cases where there is no way around it, but this should be
confined to modules with system code rather than scattered
everywhere; something along the lines of the sketch below.
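
A minimal sketch of such a quarantine module, assuming only
long-standing druntime facilities (core.stdc.stdlib and
core.memory); the module and helper names are made up for
illustration:

module sysalloc; // hypothetical "system code" module for manual allocation

import core.stdc.stdlib : malloc, free;
import core.memory : GC;

/// Allocate n Ts outside the GC heap; the caller must call release().
T[] acquire(T)(size_t n) @system
{
    auto p = cast(T*) malloc(T.sizeof * n);
    assert(p !is null, "out of memory");
    return p[0 .. n];
}

/// Return a buffer obtained from acquire().
void release(T)(T[] buf) @system
{
    free(buf.ptr);
}

@system unittest
{
    GC.disable();                  // pause collections around the critical path
    scope (exit) GC.enable();
    auto buf = acquire!int(1024);  // no GC involvement at all
    scope (exit) release(buf);
    buf[0] = 42;
    assert(buf[0] == 42);
}

Everything else in the program stays GC-managed; only code that
explicitly imports such a module ever touches malloc/free, and
GC.disable()/GC.enable() bracket the realtime-critical path.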

>
>> Of course I think that, given time, D compilers will be able
>> to achieve C++-like performance, even with GC or, who knows, a
>> reference-counted version.
>>
>> Nowadays the only place I do manual memory management is when
>> writing Assembly code.
>>
>
> Apparently you don't write realtime software. I get so
> frustrated on this forum by how few people care about realtime
> software, or any architecture other than x86 (no offense to you
> personally, it's a general observation).
> Have you ever noticed how smooth and slick the iPhone UI feels?
> It runs at 60Hz and doesn't miss a beat. It wouldn't work in D.
> Video games can't stutter; audio/video processing can't
> stutter. ...

I am well aware of that, and I actually do follow the game
industry quite closely, it being my second interest after
systems/distributed computing. I was also an IGDA member for
quite a few years.

However, I do see a lot of games being pushed out the door in
Java and C#, with local optimizations done in C and C++.

Yeah, most of them are not AAA, but that does not make them any
less enjoyable.

I also had the pleasure of using the Native Oberon and AOS
operating systems back in the late '90s at university: desktop
operating systems written in GC-enabled systems programming
languages. Sure, you could do manual memory management, but only
via the SYSTEM pseudo-module.

One of the applications was a video player; just the decoder was
written in Assembly.

http://ignorethecode.net/blog/2009/04/22/oberon/


In the end the question is: what compelling feature would a D
with only manual memory management have against C++1y and Ada,
already established languages with industry standards?

Then again, my lack of experience in the embedded world may
invalidate what I think is the right way.

--
Paulo

