Slow GC?
bearophile
bearophileHUGS at lycos.com
Thu Mar 13 19:38:23 PDT 2008
While writing a program I noticed it was quite slow (about 35-40 seconds to run, versus less than 8 seconds for the same program in Python). I have progressively simplified it down to the following D code, which shows a similar problem:
import std.file, std.string, std.gc;
import std.stdio: putr = writefln;
static import std.c.time;

double clock() {
    auto t = std.c.time.clock();
    return t / cast(double)std.c.time.CLOCKS_PER_SEC;
}

void main() {
    //disable;
    auto t0 = clock();
    auto txt = cast(string)read("text.txt"); // 6.3 MB of text
    auto t1 = clock();
    auto words = txt.split();
    auto t2 = clock();
    putr("loading time: ", t1 - t0); // 0.08 s
    putr("splitting time: ", t2 - t1); // 3.69 s with GC, 0.66 s without
    // Total running time with GC = 10.85 s
}
As you can see, the splitting itself takes about 3.7 seconds, but the program then needs about 7 more seconds to deallocate the memory it used before terminating.
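To see where that time goes, here is a small sketch (assuming std.gc.fullCollect() from Phobos 1 behaves like the collection pass that runs at program exit) that triggers and times a full collection explicitly while the words array is still alive:

import std.file, std.string, std.gc;
import std.stdio: putr = writefln;
static import std.c.time;

double clock() {
    return std.c.time.clock() / cast(double)std.c.time.CLOCKS_PER_SEC;
}

void main() {
    auto txt = cast(string)read("text.txt"); // 6.3 MB of text
    auto words = txt.split();
    auto t3 = clock();
    fullCollect(); // run a full collection explicitly instead of waiting for exit
    auto t4 = clock();
    putr("collection time: ", t4 - t3);
}

Presumably most of the missing seconds show up in that line, because the collector has to scan the millions of string slices held by words.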
I have written the same program in Python, without using Psyco, disabling the GC, or doing anything fancy:
from time import clock
t0 = clock()
txt = file("text.txt").read() # 6.3 MB of text
t1 = clock()
words = txt.split()
t2 = clock()
print "loading time:", round(t1 - t0, 2) # 0.29 s
print "splitting time:", round(t2 - t1, 2) # 1.8 s
# Total running time = 3.18 s
Its splitting is about twice as fast, but that's not the most important thing to notice: when the Python program is finished, it needs only about 1.1 seconds to deallocate its memory and terminate.
It seems there is some problem in the D GC: in my original D program, out of a total running time of about 40 seconds, 17 are spent on the final memory deallocation, while the same program in Python needs less than 2 seconds for the final memory deallocation plus virtual machine shutdown.
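The commented-out disable; line in the D code above hints at a possible workaround. This is just a sketch, assuming std.gc.disable()/std.gc.enable() from Phobos 1: keep the collector switched off while the big array of slices grows, then switch it back on:

import std.file, std.string, std.gc;

void main() {
    auto txt = cast(string)read("text.txt"); // 6.3 MB of text
    std.gc.disable(); // no collections while the slice array grows
    auto words = txt.split();
    std.gc.enable();  // allow collections again for the rest of the program
    // The final scan at program exit is still paid, but the splitting
    // phase itself drops to about 0.66 s (the "without" timing above).
}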
Tests performed on Win2K, 256 MB RAM, Pentium III 500 MHz, with Python 2.5.2 and DMD 1.028. My timings are approximate, but a 0.2 s resolution is more than enough to show the problem.
Bye,
bearophile