DIP60: @nogc attribute

Manu via Digitalmars-d digitalmars-d at puremagic.com
Thu Apr 17 06:43:06 PDT 2014


On 17 April 2014 23:17, via Digitalmars-d <digitalmars-d at puremagic.com> wrote:

> On Thursday, 17 April 2014 at 12:20:06 UTC, Manu via Digitalmars-d wrote:
>
>> See, I just don't find managed memory incompatible with 'low level'
>> realtime or embedded code, even on tiny microcontrollers in principle.
>>
>
> RC isn't incompatible with realtime, since the overhead is O(1).
>
> But it is slower than the alternatives where you want maximum performance.
> E.g. raytracing.
>

You would never allocate in a ray-tracing loop. If you need a temp, you
would use some pre-allocation strategy. This is a tiny, self-contained, and
highly specialised loop, and it will always have an equally specialised
allocation strategy.
You also don't make library calls inside a raytrace loop.
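To make that concrete, here is a minimal sketch of the kind of pre-allocation strategy I mean: a fixed scratch pool that is carved up with bump allocation inside the hot loop and reset in one store per pixel. All names here are illustrative, not from any real renderer, and it ignores alignment for brevity.

```d
// Hypothetical scratch pool: allocated once outside the hot loop,
// so the inner loop never touches the heap (no GC, no malloc).
struct ScratchPool
{
    ubyte[4096] buffer;   // preallocated up front
    size_t used;

    // Carve a temp out of the pool; O(1) bump allocation.
    // (Ignores alignment for brevity -- a real pool would round up.)
    T* alloc(T)()
    {
        assert(used + T.sizeof <= buffer.length, "pool exhausted");
        auto p = cast(T*) &buffer[used];
        used += T.sizeof;
        return p;
    }

    // Releasing every temp at once is a single store.
    void reset() { used = 0; }
}

struct HitInfo { float t; }

void tracePixels(ref ScratchPool pool, int pixelCount)
{
    foreach (i; 0 .. pixelCount)
    {
        auto hit = pool.alloc!HitInfo();  // temp for this pixel only
        hit.t = float.max;
        // ... intersect scene, shade using *hit ...
        pool.reset();                     // free all temps in one go
    }
}
```

The point is that the allocation policy is decided by the loop, not by the language's global memory manager.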


> And it is slower and less "safe" than GC for long-running servers that
> have uneven loads. E.g. web services.
>

Hey? I don't know what you mean.


I think it would be useful to discuss real scenarios when discussing
> performance:
>
> 1. Web server request that can be handled instantly (no database lookup):
> small memory requirements and everything is released immediately.
>
> Best strategy might be to use a release pool (allocate incrementally and
> free all upon return in one go).
>

Strings are the likely source of allocation. I don't think this suggests a
preference for GC or ARC either way. A high-frequency webserver would use
something more specialised in this case, I imagine.
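The "release pool" strategy from scenario 1 can be sketched very simply: allocate incrementally during the request, free everything in one go when the handler returns. This is a hypothetical illustration, not real server code; note the bookkeeping array itself still lives on the GC heap here, which a real pool would avoid.

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical request-scoped release pool: incremental allocation,
// one bulk free on scope exit.
struct ReleasePool
{
    void*[] blocks;

    void* alloc(size_t n)
    {
        auto p = malloc(n);
        blocks ~= p;   // caveat: this array is GC-allocated in the sketch
        return p;
    }

    ~this()
    {
        foreach (b; blocks)
            free(b);
    }
}

void handleRequest()
{
    ReleasePool pool;   // destroyed -- and all blocks freed -- on return
    auto buf = cast(char*) pool.alloc(256);
    // ... build the response in buf ...
}
```

Nothing here favours GC or ARC; the pool sidesteps both for the duration of the request.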

> 2. Web server, cached content-objects: lots of cycles, shared across
> threads.
>
> Best strategy is global GC.
>

You can't have web servers locking up for tens to hundreds of milliseconds
at random intervals... that's completely unacceptable.
Or, if there is no realtime allocation, then the management strategy is
irrelevant.

> 3. Non-maskable interrupt: can cut into any running code at any time. No
> deallocation may happen, and it can only touch state that is consistent
> after single-instruction atomic CPU operations.
>
> Best strategy is preallocation and single instruction atomic communication.


Right, interrupts wouldn't go allocating from the master heap.
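A minimal sketch of what scenario 3 looks like in practice, assuming a preallocated double buffer and `core.atomic` for the publish step (the handler and its names are hypothetical): the interrupt path never allocates, and communication with normal code is a single atomic store.

```d
import core.atomic : atomicLoad, atomicStore;

__gshared int[2] slots;   // preallocated double buffer
shared size_t published;  // index of the slot that is safe to read

// Runs in interrupt context: no heap, no locks.
void onInterrupt(int sample)
{
    auto next = (atomicLoad(published) + 1) % 2;
    slots[next] = sample;           // write the inactive slot
    atomicStore(published, next);   // single-instruction publish
}

// Runs in normal context.
int latest()
{
    return slots[atomicLoad(published)];
}
```

Whatever the master heap's management strategy is, it simply never comes into play on this path.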


I don't think these scenarios are particularly relevant.

>> ARC would be fine in low-level code, assuming the language supported it to
>> the fullest of its abilities.
>>
>
> Yes, but that requires whole program optimization, since function calls
> cross compilation unit boundaries frequently.


D doesn't usually have opaque compilation unit boundaries. And even where
it does, assuming the source is available, the compiler can still inline
across them, since the source of imported modules is available while
compiling a single unit.
I don't think WPO is as critical as you say.

>> No. It misses basically everything that compels the change. Strings, '~',
>> closures. D largely depends on its memory management.
>>
>
> And that is the problem. Strings can usually be owned objects.
>

I find strings are often highly shared objects.

What benefits most from GC are the big complex objects that have lots of
> links to other objects, so you get many circular references.
>
> You usually have fewer of those.
>

These tend not to change much at runtime.
Transient/temporary allocations, on the other hand, are very unlikely to
contain circular references.

Also, I would mark weak references explicitly.
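By "mark weak references explicitly" I mean something like this hypothetical manual-RC sketch: only the owning (strong) link touches the count, so a parent <-> child cycle can never keep the pair alive. None of these names come from any real D library.

```d
import core.stdc.stdlib : calloc, free;

// Hypothetical refcounted payload with an explicitly weak back-link.
struct Payload
{
    int refs;
    Payload* parent;   // weak: never counted; must not outlive the parent
}

Payload* make()
{
    auto p = cast(Payload*) calloc(1, Payload.sizeof);
    p.refs = 1;        // the caller holds one strong reference
    return p;
}

Payload* retain(Payload* p)
{
    if (p) p.refs++;
    return p;
}

void release(Payload* p)
{
    if (p && --p.refs == 0)
        free(p);
}
```

Under ARC the compiler would emit the retain/release pairs; the programmer's only job would be annotating which links are weak.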


>> Take this seriously. I want to see ARC absolutely killed dead rather than
>> dismissed.
>>
>
> Why is that? I can see ARC in D3 with whole program optimization. I cannot
> see how D2 could be extended with ARC given all the other challenges.


Well it's still not clear to me what all the challenges are... that's my
point. If it's not possible, I want to know WHY.

