Modern C++ Lamentations
Dukc
ajieskola at gmail.com
Fri Jan 4 14:00:27 UTC 2019
On Monday, 31 December 2018 at 13:20:35 UTC, Atila Neves wrote:
> Blog:
>
> https://atilanevesoncode.wordpress.com/2018/12/31/comparing-pythagorean-triples-in-c-d-and-rust/
Isn't the main problem with the performance of Timon's range loop
that it uses arbitrary-sized integers (BigInts)? I took his
example and modified it to this:
import std.experimental.all;
import std.datetime.stopwatch : AutoStart, StopWatch;

// then = map + joiner, i.e. a flatMap over ranges
alias then(alias a) = (r) => map!a(r).joiner;

void main()
{
    auto sw = StopWatch(AutoStart.no);

    // First run: arbitrary-precision BigInt arithmetic
    {
        sw.start;
        scope (success) sw.stop;
        auto triples = recurrence!"a[n-1]+1"(1.BigInt)
            .then!(z => iota(1, z + 1).then!(x => iota(x, z + 1).map!(y => tuple(x, y, z))))
            .filter!(t => t[0] ^^ 2 + t[1] ^^ 2 == t[2] ^^ 2)
            .until!(t => t[2] >= 500);
        triples.each!((x, y, z) { writeln(x, " ", y, " ", z); });
    }
    writefln("Big int time is %s microseconds", sw.peek.total!"usecs");
    sw.reset;

    // Second run: the same pipeline, but with fixed-width long
    {
        sw.start;
        scope (success) sw.stop;
        auto triples = recurrence!"a[n-1]+1"(1L)
            .then!(z => iota(1, z + 1).then!(x => iota(x, z + 1).map!(y => tuple(x, y, z))))
            .filter!(t => t[0] ^^ 2 + t[1] ^^ 2 == t[2] ^^ 2)
            .until!(t => t[2] >= 500);
        triples.each!((x, y, z) { writeln(x, " ", y, " ", z); });
    }
    writefln("Long int time is %s microseconds", sw.peek.total!"usecs");
}
The output, with LDC version 1.11.0-beta2, compiled via:
dub --compiler=ldc2 --build=release
was:
3 4 5
6 8 10
5 12 13
9 12 15
8 15 17
[...snip...]
155 468 493
232 435 493
340 357 493
190 456 494
297 396 495
Big int time is 4667925 microseconds
3 4 5
6 8 10
5 12 13
9 12 15
8 15 17
[...snip...]
155 468 493
232 435 493
340 357 493
190 456 494
297 396 495
Long int time is 951821 microseconds
That is almost five times as fast. Assuming the factor would be
the same in your blog, doesn't that account for most of the
performance difference between the D range version and the
others? The remaining difference might be explained by
bounds checking.
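If bounds checking is indeed the culprit, that hypothesis is easy to test by rebuilding without it. A sketch, assuming the same dub project and LDC toolchain as above (the source file name app.d is hypothetical):

```shell
# Rebuild with array bounds checks disabled (dub's release-nobounds
# build type) and compare the timings against the release build:
dub --compiler=ldc2 --build=release-nobounds

# Equivalently, the flag can be passed to the compiler directly:
ldc2 -O3 -release -boundscheck=off app.d
```

If the long-int timing drops noticeably under release-nobounds, bounds checking accounts for part of the remaining gap; if not, the difference lies elsewhere.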
More information about the Digitalmars-d
mailing list