Slower than Python

deadalnix deadalnix at gmail.com
Sun Mar 3 02:07:30 PST 2013


On Sunday, 3 March 2013 at 08:03:53 UTC, Russel Winder wrote:
> Yes because the C/C++/D/etc. compilers are attempting to predict
> the control flow of the program in execution and optimize all
> cases for all possibilities. JITs are just focussing on the
> runtime bottlenecks with the actual data as being used. This
> allows for more focussed code generation in the actual context.
> I would suspect that in many cases the generated code is
> effectively the same, but JITs can often produce unexpected and
> faster code because they have more data to optimize with and
> less optimization to do.
>

That is the theory, but in practice it doesn't work that well: 
you have to instrument the code to gather measurements so you 
can optimize according to runtime data. Plus, you can't simply 
monkey-patch the generated code (it is unspecified how x86 CPUs 
load instructions, so it isn't guaranteed that the CPU won't see 
garbage). That instrumentation cost, plus the cost of the 
optimization itself, means the whole thing is only worth it when 
the win from optimizing is high.
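To make that overhead concrete, here is a minimal, hypothetical 
sketch (in D; hotThreshold and compute are made-up names, not 
anything from a real JIT) of the per-call bookkeeping a JIT pays 
before it even knows whether a function is hot enough to be 
worth recompiling:

    import std.stdio;

    enum hotThreshold = 10_000;  // made-up cutoff for "hot"
    uint computeCalls;           // profiling counter, bumped on every call

    int compute(int x)
    {
        ++computeCalls;          // instrumentation overhead on the fast path
        if (computeCalls == hotThreshold)
            writeln("compute() is hot; a JIT would recompile/patch it here");
        return 2 * x + 1;
    }

    void main()
    {
        long sum;
        foreach (i; 0 .. 100_000)
            sum += compute(i);
        writeln(sum);
    }

Every single call pays for the counter update; the recompilation 
it enables only pays off if compute() ends up dominating the run.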

> To say more would require actual comparative data and I suspect
> no-one on this list will do that.

It isn't worth it. As explained above, you can conclude that 
this is highly dependent on the code you manipulate and the 
runtime data that is thrown at it. Note that I discussed with 
people from LLVM how this can be done in statically compiled 
code. In fact, it can be done, but it is rather complicated and 
not worth automating for now.
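For reference, the closest existing equivalent in statically 
compiled code is profile-guided optimization, which is an 
explicit, manual two-pass build rather than something the 
compiler automates at runtime. With GCC, for example, it looks 
roughly like this (clang has an equivalent workflow; app.c and 
the input file are placeholders):

    gcc -O2 -fprofile-generate app.c -o app   # build an instrumented binary
    ./app < representative_input.txt          # run it to collect profile data
    gcc -O2 -fprofile-use app.c -o app        # rebuild using the recorded profile

That gets you the runtime data without the monkey patching, but 
it only helps as much as the training run resembles real usage.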

