new principle of division between structures and classes
Andrei Alexandrescu
SeeWebsiteForEmail at erdani.org
Mon Jan 12 09:25:16 PST 2009
Brad Roberts wrote:
> Andrei Alexandrescu wrote:
>> Weed wrote:
>>> Weed wrote:
>>>
>>>>>> 4. Java and C# also use objects by reference? But both of these
>>>>>> languages are interpreted. I assume that an interpreter allocates
>>>>>> memory on the heap and on the stack at roughly the same speed, which
>>>>>> is why the authors of these languages went with the reference model.
>>>>>>
>>>>> Neither of these languages is interpreted; they are both compiled into
>>>>> native code at runtime.
>>>> Oh! :) But I suspect that such a class scheme is somehow tied to
>>>> JIT compilation.
>>>>
>>> I guess allocation in Java is fast because it uses its own memory
>>> manager.
>>>
>>> I do not know how accurate this is, but:
>>> http://www.ibm.com/developerworks/java/library/j-jtp09275.html
>>>
>>> "Pop quiz: Which language boasts faster raw allocation performance, the
>>> Java language, or C/C++? The answer may surprise you -- allocation in
>>> modern JVMs is far faster than the best performing malloc
>>> implementations. The common code path for new Object() in HotSpot 1.4.2
>>> and later is approximately 10 machine instructions (data provided by
>>> Sun; see Resources), whereas the best performing malloc implementations
>>> in C require on average between 60 and 100 instructions per call
>>> (Detlefs, et. al.; see Resources)."
>> Meh, that should be taken with a grain of salt. An allocator that only
>> bumps a pointer will simply eat more memory and be less cache-friendly.
>> Many applications aren't that thrilled with the costs of such a model.
>>
>> Andrei
>
> Take it as nicely seasoned. The current JVM GC and memory subsystem is
> _extremely_ clever. However, it completely relies on the ability to
> move objects during garbage collection. If it were purely the allocator
> that behaved that way, you'd be right. But its interaction with the GC
> is where the system comes together to form a useful whole.
I understand. My point is that a 10-cycles-per-allocation allocator will
necessarily use more memory than one that attempts to reuse memory.
There's no way around that. I mean we know what those cycles do :o).
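To make concrete what those few cycles do, here is a toy sketch in Java
(hypothetical names, not HotSpot's actual code) of a bump-the-pointer fast
path: the common case is roughly one bounds check and one addition, which is
why it is so cheap, and also why freed memory is never reused unless a moving
collector compacts the region behind it.

final class BumpRegion {
    private final byte[] region;   // pretend this is a chunk handed out by the GC
    private int next;              // offset of the first free byte

    BumpRegion(int sizeInBytes) {
        this.region = new byte[sizeInBytes];
        this.next = 0;
    }

    // Returns the offset of the new "object", or -1 to signal the slow path
    // (i.e. "go ask the collector for more space").
    int allocate(int sizeInBytes) {
        int aligned = (sizeInBytes + 7) & ~7;   // keep allocations 8-byte aligned
        if (next + aligned > region.length) {
            return -1;                          // slow path: collection needed
        }
        int result = next;                      // hand out the current position...
        next += aligned;                        // ...and bump the pointer past it
        return result;
    }
}

class BumpDemo {
    public static void main(String[] args) {
        BumpRegion r = new BumpRegion(1 << 20);
        int a = r.allocate(16);
        int b = r.allocate(16);
        // a and b are adjacent slots; nothing is ever handed back individually.
        System.out.println("first offset: " + a + ", second offset: " + b);
    }
}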
Some applications don't work well with that trade-off. Escape analysis
does reduce the number of cache-unfriendly patterns, but, as of today, not
to the point where the issue can be safely ignored.
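For illustration, this is the sort of pattern escape analysis targets
(whether a given JIT actually elides the allocation depends on the VM and
its settings; the names below are made up):

final class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

class EscapeDemo {
    // The Point never escapes distance(): it is not stored in a field,
    // returned, or passed to other code, so a JIT with escape analysis may
    // scalar-replace it and skip the heap allocation entirely.
    static double distance(double x, double y) {
        Point p = new Point(x, y);
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    public static void main(String[] args) {
        System.out.println(distance(3.0, 4.0));   // prints 5.0
    }
}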
There's no contention that GC has made great progress lately, and that's
a great thing.
Andrei