32-bit Memory Limitations
    John Demme 
    jdd at cs.columbia.edu
       
    Fri Sep  3 14:37:18 PDT 2010
    
    
  
Hello all-
I apologize since I'm sure this question has been asked numerous times 
before, but I could not find it in the last 10k messages.
Is there a rough timeline for 64-bit DMD 2 support?  (For Linux--I don't 
care about Windows.)  I understand that Walter is working on it and 
certainly don't expect a firm date, but I have no sense of the amount of 
work involved... Is this a one man-week feature or several man-months?
I know GDC has 64-bit support, but it has not been synced up in some time.  
Does anyone know if this will be updated in the near future?
I ask because I have written a scientific application that operates on a 
very large matrix of doubles--in the range of 200^4 elements--requiring 
about 12GB of memory, far larger than the ~4GB of address space available 
to a 32-bit process.  Ideally, I'd like to ramp this up to 20GB matrices.  
I'm currently running on machines with 24GB of RAM (and we may upgrade a 
few next year), so this is not a performance issue, merely a software issue.
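Just to show where the 12GB figure comes from, here's the back-of-the-envelope 
arithmetic (a throwaway snippet, not part of the application itself):

    import std.stdio;

    void main()
    {
        // Rough sizing for an n^4 matrix of doubles (8 bytes each).
        enum ulong n = 200;
        ulong bytes = n * n * n * n * double.sizeof;   // 1.6e9 elements
        writefln("%s^4 doubles = %s bytes (~%.1f GiB)",
                 n, bytes, bytes / (1024.0 * 1024 * 1024));
    }

The 20GB target works out the same way, just with a slightly larger n per 
dimension.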
Additionally, is anyone aware of any extreme cleverness to transparently 
work around this issue?  I would imagine not, but I'm constantly amazed by 
some of the hacks I've seen.
4GB limits me to about 150^4 elements, which is acceptable for the time 
being.  As such, I'm not terribly interested in any extreme hacks to get 
around this.  I could obviously multi-process the computation (which would 
help in distributing it), but I don't need to do that yet.
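If I did eventually go the multi-process route, the split I have in mind is 
nothing fancier than giving each worker a contiguous slice of the data.  A 
sketch of the partitioning (not code I've actually written):

    import std.stdio;

    // Sketch only: how the 200^4 elements might be divided among P worker
    // processes; worker p would own the contiguous slice [lo, hi).
    void showPartition(ulong total, uint P)
    {
        foreach (p; 0 .. P)
        {
            ulong lo = total * p / P;
            ulong hi = total * (p + 1) / P;
            writefln("worker %s: %s elements, ~%.1f GiB",
                     p, hi - lo,
                     (hi - lo) * double.sizeof / (1024.0 * 1024 * 1024));
        }
    }

With 4 workers, each slice of the 200^4 case comes out to roughly 3 GiB, 
which stays under the 32-bit limit.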
(Yes, all of those exponents are 4, not 2.  This is actually a 
4-dimensional matrix, but for most parts of the computation I can treat it 
like a typical 2-dim matrix.  Not relevant, I suppose, but perhaps 
interesting.)
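In case that's unclear: the whole thing is stored flat, and the 4D indices 
pair up naturally into a 2D view.  Roughly (illustrative names, not my 
actual code):

    // Illustrative only: address a flat n x n x n x n array of doubles as
    // an (n*n) x (n*n) matrix by pairing up the indices.
    double at(double[] data, size_t n, size_t i, size_t j, size_t k, size_t l)
    {
        size_t row = i * n + j;   // first two indices form the "row"
        size_t col = k * n + l;   // last two indices form the "column"
        return data[row * (n * n) + col];
    }

So most of the code just sees rows and columns of a (200*200) x (200*200) 
matrix.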
~John
    
    