Are Gigantic Associative Arrays Now Possible?

dlangPupil via Digitalmars-d digitalmars-d at puremagic.com
Thu Mar 23 17:27:00 PDT 2017


On Thursday, 23 March 2017 at 10:27:36 UTC, Ola Fosheim Grøstad 
wrote:
>
> Increasing the size of a hash table would be prohibitively 
> expensive. You need a data-structure that can grow gracefully.

Hi Ola,

Are the hash tables you refer to the ones that D uses in the 
background to implement associative arrays, or the ones that a 
programmer might create using AAs?

--If the former, then perhaps the AA's hash function could be 
tweaked for terabyte-range SSD "RAM".  But even if the typical 
2:1 bucket-to-data storage ratio couldn't be improved, creating 
32 TB of buckets would still allow 16 TB of 
nearly-instantly-addressable data (on a 4-core Xeon with a 48 TB 
Optane SSD).
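
For instance (just a rough sketch on my part; the key type, field 
name, and mixing constant below are made up for illustration), D 
already lets a key struct supply its own toHash and opEquals, so 
an alternative hash distribution could be tried without touching 
the runtime:

// Sketch only: a key struct that supplies its own hash, so the
// built-in AA's bucket distribution can be experimented with
// without changing druntime.  Names and constant are illustrative.
struct WideKey
{
    ulong id;

    size_t toHash() const @safe pure nothrow @nogc
    {
        // simple multiplicative mix; the constant is an arbitrary example
        return cast(size_t)(id * 11400714819323198485UL);
    }

    bool opEquals(const WideKey rhs) const @safe pure nothrow @nogc
    {
        return id == rhs.id;
    }
}

void main()
{
    string[WideKey] table;            // built-in AA keyed on WideKey
    table[WideKey(42)] = "hello";
    assert(table[WideKey(42)] == "hello");
}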

--If the latter, then my goal is to design something that makes 
specific items randomly and instantly accessible, like a hash 
table, but with zero potential collisions.  Thanks!
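
Very roughly, the kind of thing I'm picturing (a toy sketch with 
made-up names, and it assumes keys can be mapped to dense integer 
indices) is a direct-address table, where every key owns exactly 
one slot, so lookups can never collide:

// Toy sketch: if each key maps to a unique dense index, a plain
// array is itself a zero-collision table; lookup is one indexed read.
struct DirectTable(V)
{
    V[] slots;
    bool[] used;

    this(size_t capacity)
    {
        slots.length = capacity;
        used.length = capacity;
    }

    void put(size_t key, V value)
    {
        slots[key] = value;   // the key *is* the slot, so no collisions
        used[key] = true;
    }

    bool get(size_t key, out V value)
    {
        if (!used[key])
            return false;
        value = slots[key];
        return true;
    }
}

void main()
{
    auto t = DirectTable!string(1_000);
    t.put(123, "record");
    string v;
    assert(t.get(123, v) && v == "record");
}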



