Why is D slower than LuaJIT?
Marco Leise
Marco.Leise at gmx.de
Sat Jun 1 23:05:24 PDT 2013
On Wed, 22 Dec 2010 17:04:21 -0500,
Andreas Mayer <spam at bacon.eggs> wrote:
> To see what performance advantage D would give me over using a scripting language, I made a small benchmark. It consists of this code:
>
> > auto L = iota(0.0, 10000000.0);
> > auto L2 = map!"a / 2"(L);
> > auto L3 = map!"a + 2"(L2);
> > auto V = reduce!"a + b"(L3);
>
> It runs in 281 ms on my computer.
>
> The same code in Lua (using LuaJIT) runs in 23 ms.
>
> That's about 10 times faster. I would have expected D to be faster. Did I do something wrong?
Actually "D" is 1.5 times faster on my computer*:
LDC**            ========  18 ms
GDC***           ===========  25 ms
LuaJIT 2.0.0 b7  ============  27 ms
DMD              =========================================  93 ms
All three D compilers are based on the DMD 2.062 front-end.
* 64-bit Linux, 2.0 GHz Mobile Core 2 Duo.
** based on LLVM 3.2
*** based on GCC 4.7.2
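For anyone who wants to reproduce the numbers: optimized builds matter
(something like dmd -O -release -inline, gdc -O3, ldc2 -O3 -release),
and you can time just the computation with std.datetime's StopWatch.
A minimal sketch, not necessarily the exact harness behind the bars above:
---------------------
import std.datetime;
import std.stdio;

void main()
{
    StopWatch sw;
    sw.start();

    // ... benchmark body goes here (e.g. the iota/map/reduce
    // pipeline from the program below) ...

    sw.stop();
    writefln("%d ms", sw.peek().msecs);
}
---------------------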
I modified the iota template to more closely reflect the one used in
the original Lua code:
---------------------
import std.algorithm;
import std.stdio;
import std.traits;

auto iota(B, E)(B begin, E end) if (isFloatingPoint!(CommonType!(B, E)))
{
    alias CommonType!(B, E) Value;

    // A minimal input range that steps by 1.0, mirroring Lua's
    // numeric for loop: no per-element arithmetic beyond an increment.
    static struct Result
    {
        private Value start, end;

        @property bool empty() const { return start >= end; }
        @property Value front() const { return start; }
        void popFront() { start++; }
    }

    return Result(begin, end);
}

void main()
{
    auto L  = iota(0.0, 10000000.0),
         L2 = map!(a => a / 2)(L),
         L3 = map!(a => a + 2)(L2),
         V  = reduce!((a, b) => a + b)(L3);
    writefln("%f", V);
}
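The speed difference to std.range.iota comes, I think, mostly from
front: for floating-point ranges Phobos computes each element as
start + step * index so rounding errors don't accumulate, which costs
a multiply and an add per element, while the struct above only
increments. Roughly like this (paraphrased, not the actual Phobos
source):
---------------------
import std.stdio;

struct PhobosStyleIota
{
    private double start, step;
    private size_t index, count;

    @property bool empty() const { return index >= count; }
    // one multiply + one add per element, vs. a single add above
    @property double front() const { return start + step * index; }
    void popFront() { ++index; }
}

void main()
{
    double sum = 0;
    for (auto r = PhobosStyleIota(0.0, 1.0, 0, 10); !r.empty; r.popFront())
        sum += r.front;
    writefln("%f", sum); // 45.000000
}
---------------------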
--
Marco