read() performance - Linux too?
Bob W
nospam at aol.com
Sun Jul 23 19:55:17 PDT 2006
/*
The std.file.read() function in dmd causes a performance
issue when reading large files of 100MB and upwards.
Reading the file itself is no problem, but the cleanup
afterwards takes forever.
I am therefore using std.mmfile, which works fine in the
Windows version of D, but using read() would be more
convenient in several cases (a sketch of the mmfile
variant follows the test program below).
Now a few questions:
1) Does anyone know if the read() performance problem
occurs in the Linux version of D as well?
2) Is there any info available on where the real problem
lies? Allocating a few hundred MB does not show the same
phenomenon, and dmc's fread() function is also painless
(both comparisons are sketched right after these
questions).
3) I did not find anything about this issue in Bugzilla.
Did I overlook an existing entry?
*/
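// For question 2: a minimal sketch of the two comparisons
// mentioned above -- a plain large allocation, and reading the
// file with dmc's fread() -- assuming the D1 std.c.stdio /
// std.c.stdlib bindings (fopen, fread, fseek, ftell, malloc).
// Neither run showed the long cleanup here, per the observation
// above; treat it as an illustration, not a benchmark. The
// read()-based program that does show the delay follows below.
import std.stdio, std.string;
import std.c.stdio;
import std.c.stdlib;

void main(char[][] av) {
    if (av.length < 2) {
        writefln("Need file name to test fread() !");
        return;
    }

    // 1) Allocate a few hundred MB on the GC heap, no file I/O.
    char[] big = new char[300 * 1024 * 1024];
    writefln("Allocated %d bytes.", big.length);
    delete big;

    // 2) Read the same file into a malloc'd buffer via fread().
    FILE* fp = fopen(toStringz(av[1]), "rb");
    if (fp is null) {
        writefln("Cannot open '%s'.", av[1]);
        return;
    }
    fseek(fp, 0, SEEK_END);
    int len = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    char* buf = cast(char*) malloc(len);
    int got = fread(buf, 1, len, fp);
    fclose(fp);
    writefln("%d bytes read via fread().", got);
    free(buf);
}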
// Try reading a 100MB+ file with the following
// program (some patience required):
import std.stdio, std.file;
alias writefln wrl;
void main(char[][] av) {
    wrl();
    if (av.length < 2) {
        wrl("Need file name to test read() !");
        return;
    }
    char[] fn = av[1];
    wrl("Reading '%s' ...", fn);
    char[] bf = cast(char[]) read(fn);
    wrl("%d bytes read.", bf.length);
    wrl("Doing something ...");
    int n = 0;
    foreach (c; bf) n += c;
    wrl("Result: %s, done.", n);
    wrl("Expect a delay here after reading a huge file ...");
    wrl();
}
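// And the std.mmfile workaround mentioned at the top, as a
// minimal sketch. It assumes the D1-era MmFile API (a
// constructor taking the file name and opSlice() returning the
// mapped contents as void[]); an illustration rather than a
// drop-in replacement.
import std.stdio, std.mmfile;
alias writefln wrl;

void main(char[][] av) {
    wrl();
    if (av.length < 2) {
        wrl("Need file name to test MmFile !");
        return;
    }
    // Map the file instead of copying it onto the GC heap.
    MmFile mf = new MmFile(av[1]);
    char[] bf = cast(char[]) mf[];
    wrl("%d bytes mapped.", bf.length);

    int n = 0;
    foreach (c; bf) n += c;
    wrl("Result: %s, done.", n);

    // Release the mapping explicitly; no huge GC-owned buffer is
    // left behind, so the long cleanup seen with read() should
    // not occur here (per the mmfile experience described above).
    delete mf;
    wrl();
}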