Primality test function doesn't work on large numbers?

Timon Gehr via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Thu Jan 12 05:20:00 PST 2017


On 10.01.2017 04:02, Elronnd wrote:
> Thank you!  Would you mind telling me what you changed aside from pow()
> and powm()?

1. This code:

// make 2^a = integer-1
while ((integer-1)%(pow(bigint(2), a))!=0)
     a--;
m = (integer-1) / pow(bigint(2), a);

a starts out as integer-1, so this computes many big powers of two when 
integer-1 has only a few factors of 2 (very likely when integer is a 
random number). I changed it to:

for(m=integer-1;m%2==0;m/=2,++a){}

where a starts from 0. Now it just counts the factors of two and m is 
computed on the fly.
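The same decomposition, written as a Python sketch for clarity (the function name is mine, not from the original code): write integer-1 as 2^a * m with m odd, dividing out factors of two one at a time.

```python
def decompose(n):
    """Write n - 1 as 2**a * m with m odd, counting factors of two on the fly."""
    a, m = 0, n - 1
    while m % 2 == 0:
        m //= 2
        a += 1
    return a, m

# e.g. decompose(13) gives (2, 3), since 12 = 2**2 * 3
```

Each loop iteration is one cheap division by 2, instead of recomputing a large power of two from scratch.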

2. There was a bug in the Miller-Rabin implementation. (There are two 
cases where the function should return false. Only one was present.)
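For reference, here is a minimal Miller-Rabin round as a Python sketch (not the poster's D code, so it may not match it line for line). The easily-missed composite case is the one at the bottom: when the chain of squarings never reaches n - 1, the base is a witness and the function must return False.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test: False means composite,
    True means probable prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2**a * m with m odd.
    a, m = 0, n - 1
    while m % 2 == 0:
        m //= 2
        a += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), m, n)
        if x == 1 or x == n - 1:
            continue  # this base is not a witness
        for _ in range(a - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break  # not a witness after all
        else:
            return False  # composite: no square in the chain hit n - 1
    return True
```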


> diff isn't giving me readable results, since there was some
> other stuff I trimmed out of the original file.  Also, while this is a
> *lot* better, I still get some lag generating 1024-bit primes and I
> can't generate larger primes in a reasonable amount of time.  Maybe my
> genbigint() function is to blame?  It isn't efficient:
>
> bigint genbigint(int numbits) {
>     bigint tmp;
>     while (numbits --> 0) {
>         tmp <<= 1;
>         tmp += uniform(0, 2);
>     }
>     return tmp;
> }

This should be quite fine, as its running time is linear in numbits 
(with rejection sampling to land in the right range, the expected 
running time is still linear in numbits). I would expect the problem to 
be pow and powm: for the typical bases, exponents and moduli you use, 
their running time is 
Ω((log(modulus)+log(exponent))*log(exponent)). The actual asymptotic 
running time is significantly more than this lower bound, as 
multiplication is done using Karatsuba.
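To see where the log(exponent) factor comes from: square-and-multiply modular exponentiation does about log2(exponent) multiplications, and reducing mod the modulus after every step keeps each operand below the modulus. A Python sketch of that algorithm (Python's built-in three-argument pow does the same thing internally):

```python
def powm(base, exponent, modulus):
    """Right-to-left square-and-multiply; intermediates never exceed modulus**2."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                   # current bit set: multiply it in
            result = result * base % modulus
        base = base * base % modulus       # square for the next bit
        exponent >>= 1
    return result
```

Computing pow(base, exponent) without reducing mod the modulus, by contrast, lets the intermediate numbers grow without bound, which is why a real powm is essential here.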


Maybe you can get a relevant speedup by using a different bigint 
implementation. The std.bigint documentation says that it is optimized 
for numbers up to ~1000 decimal digits and that you should probably use 
GMP for larger numbers. https://dlang.org/phobos/std_bigint.html
GMP also has implementations of pow and powm built-in.


Other than that:

1. You can save a constant factor of time in genbigint by generating 
more than one bit at a time. (But I don't think this is the bottleneck.)

2. Generating only odd candidates should give you a factor of two.

3. In mrprime, you should also check divisibility by small primes other 
than 2, as a random number is likely to have a small prime factor.
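Suggestions 2 and 3 combined, as a Python sketch (the function names, the small-prime list and its cutoff are my arbitrary choices, not from the original code): generate only odd candidates with the top bit forced, and cheaply reject anything with a small prime factor before running the expensive test.

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def gen_candidate(numbits):
    """Random odd number with the top bit set, so it has exactly numbits bits."""
    n = random.getrandbits(numbits)
    n |= (1 << (numbits - 1)) | 1  # force the top bit and oddness
    return n

def survives_trial_division(n):
    """Cheaply reject candidates with a small prime factor before Miller-Rabin."""
    return all(n % p != 0 for p in SMALL_PRIMES if p < n)
```

Trial division by a dozen small primes filters out the large majority of random candidates for the cost of a few cheap remainder operations each.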


If what you need is some fast black-box to generate large primes, you 
can probably use existing libraries.

