read() performance - Linux too?
Unknown W. Brackets
unknown at simplemachines.org
Mon Jul 24 22:02:31 PDT 2006
Actually, I believe it's just:
import std.gc;
// ...
ubyte[] data = new ubyte[1024 * 1024];
std.gc.removeRange(data.ptr);  // removeRange takes the start address
This tells the GC, afaik, not to scan the described range for pointers.
It seems entirely possible to me that the compiler could generate this
call automatically for new ubyte[] and similar allocations whose element
types contain no pointers.
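Applied to the original std.file.read() case, the workaround would look
something like the sketch below. This is D1-era syntax, and it assumes
removeRange is the right call for a GC-allocated block (as opposed to a
root range registered with addRange) - worth verifying against the
std.gc docs; the filename is made up.

```d
import std.file;
import std.gc;

void main()
{
    // read() returns a void[] backed by GC memory; with a 100MB+ file
    // the collector's conservative scan of the buffer gets very slow.
    void[] buf = std.file.read("big.dat");

    // Ask the GC not to treat the buffer's bytes as potential
    // pointers. (In D1 an array converts implicitly to a pointer,
    // but passing .ptr is clearer.)
    std.gc.removeRange(buf.ptr);

    // ... process buf ...
}
```

If removeRange turns out to apply only to ranges previously added with
addRange, the same idea would go through whatever per-block hook the GC
exposes for marking an allocation as pointer-free (if memory serves,
std.gc gained a hasNoPointers(void*) call for exactly this purpose).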
-[Unknown]
> "Derek Parnell" <derek at nomail.afraid.org> wrote in message
> news:dg25ykpt8kxw$.1r2mhu0u851l0.dlg at 40tude.net...
>> On Mon, 24 Jul 2006 04:55:17 +0200, Bob W wrote:
>>
>>> /*
>>> The std.file.read() function in dmd causes a performance
>>> issue after reading large files from 100MB upwards.
>>> Reading the file seems to be no problem, but cleanup
>>> afterwards takes forever.
>> It's a GC effect. The GC is scanning through the buffer looking for
>> addresses to clean up.
>
> Wouldn't it be possible to add some way of telling the GC not to scan
> something? Perhaps there's already something in std.gc (I didn't check),
> but I actually think the compiler could do this by checking the TypeInfo.
> I wouldn't go so far as to expect it to scan only the pointer fields of a
> struct, but at least it could ignore char[] and float[] (and other arrays
> of non-pointer element types).
>
> I've written the Universal Machine from the programming contest (see the
> thread below) and am running into memory problems. I have the feeling that
> a lot of the opcodes in the machine code are being treated as pointers.
> Memory just keeps growing and the GC cycles take longer and longer.
>
> It was great to write the UM without having to worry about memory, but now
> I'll have to worry about it and in a totally new way: trying to outsmart the
> GC. Either that, or malloc/memset/free : (
>
> L.
>
>
More information about the Digitalmars-d mailing list