read() performance - Linux too?

Dave Dave_member at pathlink.com
Sun Jul 23 20:55:26 PDT 2006


Bob W wrote:
> /*
> The std.file.read() function in dmd causes a performance
> issue after reading large files from 100MB upwards.
> Reading the file seems to be no problem, but cleanup
> afterwards takes forever.
> 
> I am therefore using std.mmfile, which works fine in the
> Windows version of D, but using read() would be more
> convenient in several cases.
> 
> Now a few questions:
> 
> 1) Does anyone know if the read() performance problem
> occurs in the Linux version of D as well?
> 
> 2) Is there any info available on where the real problem
> lies? Allocating a few hundred MB does not show the same
> phenomenon, and dmc's fread() function is also painless.
> 
> 3) I did not find anything about this issue in Bugzilla.
> Did I overlook an existing entry?
> 
> */
> 
> 
> // Try reading a 100MB+ file with the following
> // program (some patience required):
> 
> import std.stdio, std.file;
> 
> alias writefln wrl;
> 
> void main(char[][] av) {
>   wrl();
>   if (av.length < 2) {
>     wrl("Need file name to test read() !");
>     return;
>   }
>   char[] fn = av[1];
>   wrl("Reading '%s' ...", fn);
>   char[] bf = cast(char[]) read(fn);
>   wrl("%d bytes read.", bf.length);
>   wrl("Doing something ...");
>   int n = 0;
>   foreach (c; bf)  n += c;
>   wrl("Result: %s, done.", n);
>   wrl("Expect a delay here after reading a huge file ...");
>   wrl();
> }
> 
> 

It's more than likely the GC; the same thing happens with a program like this:

import std.outbuffer;
import std.string : atoi;
import std.stdio  : wrl = writefln;

void main(char[][] args)
{
    // Append a short string n times; OutBuffer repeatedly regrows its
    // GC-allocated backing store.
    int n = args.length > 1 ? cast(int) atoi(args[1]) : 10_000_000;  // atoi returns long, narrow explicitly
    OutBuffer b = new OutBuffer;
    for (int i = 0; i < n; i++)
    {
        b.write("hello\n");
    }
    wrl(b.toString.length);
}

Run without an argument (n = 10_000_000), it takes forever on Windows
(the process starts swapping), while on Linux it takes about a second.
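If the GC is indeed what is behind the delay in the read() test above, freeing the buffer explicitly before exit should move or remove it. A quick check along those lines (a sketch only; it assumes D1's delete on dynamic arrays and std.gc.fullCollect()):

import std.stdio, std.file, std.gc;

void main(char[][] av)
{
    if (av.length < 2)
        return;
    char[] bf = cast(char[]) read(av[1]);
    int n = 0;
    foreach (c; bf)
        n += c;
    writefln("Result: %s", n);
    // Release the large buffer explicitly and collect now, so any remaining
    // delay at program exit is easier to attribute to the GC.
    delete bf;              // assumption: delete on a dynamic array frees the GC block
    std.gc.fullCollect();   // assumption: std.gc.fullCollect() exists in D1 Phobos
    writefln("Explicit cleanup done, exiting ...");
}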


