Code speed (and back to the memory leaks...)
Steven Schveighoffer
schveiguy at yahoo.com
Thu Apr 15 05:38:18 PDT 2010
On Wed, 14 Apr 2010 17:19:53 -0400, Joseph Wakeling
<joseph.wakeling at webdrake.net> wrote:
> /////////////////////////////////////////////////////////////////////////
> version(Tango) {
>     import tango.stdc.stdio: printf;
> } else {
>     import std.stdio: printf;
> }
>
> void main()
> {
>     double[] x;
>
>     for(uint i=0;i<100;++i) {
>         x.length = 0;
>
>         for(uint j=0;j<5_000;++j) {
>             for(uint k=0;k<1_000;++k) {
>                 x ~= j*k;
>             }
>         }
>
>         printf("At iteration %u, x has %u elements.\n",i,x.length);
>     }
> }
> /////////////////////////////////////////////////////////////////////////
>
> I noticed that when compiled with LDC the memory usage stays constant,
> but in DMD the memory use blows up. Is this a D1/D2 difference (with D2
> having the more advanced features that require preservation of memory)
> or is it a bug in one or the other of the compilers ... ?
It is because D2 is more advanced about preserving memory. In D1, setting
the length to zero also tells the runtime that it is safe to append in
place, so subsequent appends reuse (and overwrite) the old block. In D2,
the runtime will not stomp on data that might still be referenced
elsewhere, so after x.length = 0 each round of appending allocates fresh
memory.
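If you want the D1-style reuse back in D2, something along these lines
should work (a sketch, assuming your compiler and druntime are recent
enough to have assumeSafeAppend in object.d):

import std.stdio : writefln;

void main()
{
    double[] x;

    for (uint i = 0; i < 100; ++i)
    {
        x.length = 0;
        assumeSafeAppend(x);  // promise the runtime nothing else references
                              // the old contents, so appends may reuse the block

        for (uint j = 0; j < 5_000; ++j)
            for (uint k = 0; k < 1_000; ++k)
                x ~= j * k;

        writefln("At iteration %s, x has %s elements.", i, x.length);
    }
}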
One thing I forgot about: there is currently a design flaw in D2's array
appending which keeps a certain number of arrays alive even after all
references to them have been removed (i.e. the arrays should be
collectable, but the GC does not collect them). I have a clear path on how
to fix that, but I haven't done it yet.
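Roughly, the pattern that hits it looks like this (a rough illustration,
not the exact code from the report):

import core.memory : GC;

void main()
{
    foreach (i; 0 .. 10)
    {
        int[] a;
        foreach (j; 0 .. 1_000_000)
            a ~= j;   // grown via the runtime's append support
        // 'a' is the only reference, and it goes away at the end of this
        // iteration, so the block should be garbage -- but the runtime's
        // append cache may still hold a reference to it.
    }

    GC.collect();  // may not reclaim blocks still pinned by the append cache
}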
See this bugzilla report:
http://d.puremagic.com/issues/show_bug.cgi?id=3929
This might also help explain why the memory blows up.
-Steve