Is continuously seeding a random number generator performance intensive?

Jeroen Bollen jbinero at gmail.com
Fri Jan 3 05:30:07 PST 2014


On Friday, 3 January 2014 at 10:06:27 UTC, monarch_dodra wrote:
> On Friday, 3 January 2014 at 01:43:09 UTC, Chris Cain wrote:
>> So, it sounds like the OP is using the x and y coords as a 
>> seed to generate a single number, and he was curious as to 
>> whether it costs too much to reseed like this for every point.
>>
>> FWIW, I'm under the impression that this is a fairly common 
>> strategy, but usually when I've seen it used, more than one 
>> number is generated at a time.
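>>
>> A minimal sketch of that strategy as I read it (untested, 
>> using std.random; the exact seed mix is my guess, and a naive 
>> one, see the caveat further down):
>>
>> import std.random;
>>
>> // One noise value per point: reseed from (x, y) on each call.
>> float noiseAt(uint x, uint y)
>> {
>>     auto rng = Random(x * 31 + y); // naive mix, see below
>>     return uniform(0.0f, 1.0f, rng);
>> }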
>
> I *thought* he wanted to do something like that, but was 
> surprised by the fact that he wanted to reseed *per* element...
>
>> You can still do this in this case. For example, divide x by 
>> 10 and generate 10 elements (column-wise) in the noise map 
>> each time; that reduces the number of reseeds by a factor of 
>> 10. Some effort might be wasted, but as long as you need a 
>> decent chunk of the noise map at any one point in time, this 
>> should work out pretty well in practice.
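>>
>> Sketched out, that could look something like this (untested; 
>> the chunk length and the seed mix are placeholders):
>>
>> import std.random;
>>
>> enum chunkLen = 10;
>>
>> // One reseed now produces chunkLen consecutive values.
>> float[chunkLen] noiseChunk(uint xChunk, uint y)
>> {
>>     auto rng = Random(xChunk * 0x9E3779B9u + y);
>>     float[chunkLen] chunk;
>>     foreach (ref v; chunk)
>>         v = uniform(0.0f, 1.0f, rng);
>>     return chunk;
>> }
>>
>> // A point (x, y) is then noiseChunk(x / chunkLen, y)[x % chunkLen].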
>>
>> My biggest concern with this approach is that you must take 
>> care that seeding with the x and y coordinates does not cause 
>> repeated elements to occur. For instance, using 
>> `Random rng = Random(x + y);` will _not_ work properly (unless 
>> you want strange mirroring down the diagonal in your noise 
>> map). There are numerous landmines to avoid when doing 
>> something like this. Some approaches may not be perfect, but 
>> depending on what you're doing, they may be sufficient.
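>>
>> One way around the x + y symmetry is a mix in which x and y 
>> are not interchangeable, e.g. multiplying each coordinate by a 
>> distinct large constant before combining (the constants below 
>> are just common spatial-hashing picks, nothing canonical):
>>
>> uint seedFor(uint x, uint y)
>> {
>>     // (x, y) and (y, x) now yield different seeds, so no
>>     // mirroring down the diagonal.
>>     return x * 73856093u ^ y * 19349663u;
>> }
>>
>> Random rng = Random(seedFor(x, y));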
>
> The approach I'd take here is to eagerly generate a "uint[X / 
> 1000][Y / 1000]" grid that holds randomly generated numbers, to 
> be used as seeds for the individual 1000*1000 blocks of the 
> noise map. I don't know how good that is in terms of 
> correlation, though.
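>
> Filling such a grid might look like this (untested sketch; X 
> and Y are taken to be compile-time map dimensions here):
>
> import std.random;
>
> uint[X / 1000][Y / 1000] blockSeeds;
>
> void initSeeds()
> {
>     auto master = Random(unpredictableSeed);
>     foreach (ref row; blockSeeds)
>         foreach (ref s; row)
>         {
>             s = master.front; // one seed per 1000*1000 block
>             master.popFront();
>         }
> }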
>
> Or, even better, create a single generator of type "Gen" and 
> store a "Gen[X / 1000][Y / 1000]", i.e. the generator with a 
> "checkpoint" saved every 1_000_000 elements. This should 
> eliminate the correlation entirely.
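>
> Since the generator structs have value semantics, a 
> "checkpoint" is just a copy taken at each block boundary; 
> roughly (untested):
>
> import std.random;
>
> alias Gen = Xorshift;
> enum blockLen = 1_000_000;
>
> Gen[X / 1000][Y / 1000] checkpoints;
>
> void initCheckpoints()
> {
>     auto g = Gen(unpredictableSeed);
>     foreach (ref row; checkpoints)
>         foreach (ref cp; row)
>         {
>             cp = g; // snapshot the stream at this boundary
>             foreach (i; 0 .. blockLen)
>                 g.popFront(); // advance to the next block
>         }
> }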
>
> AFAIK, all generators above the "linear congruential generator" 
> have a period above 10^12, so this approach should be fine. The 
> only issue with such an approach might be the size of the 
> PRNGs themselves: Xorshift, for example, is only a few bytes 
> big, so that's fine. But MT's state is roughly 2.5KB (624 
> 32-bit words), which is a hefty amount to chug along.

I already considered this, but the problem is that I need to 
smooth the noise, and to do that I need all the surrounding 
'checkpoints' too. This means it'll have to load five at a time.
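
Roughly what I mean (blockAt is a hypothetical helper that 
regenerates one 1000*1000 block from its checkpoint):

// Smoothing near a block edge touches the four neighbouring
// blocks as well as the current one, so five blocks end up
// resident at once.
auto centre = blockAt(bx, by);
auto left   = blockAt(bx - 1, by);
auto right  = blockAt(bx + 1, by);
auto up     = blockAt(bx, by - 1);
auto down   = blockAt(bx, by + 1);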

