Do everything in Java…

John Colvin via Digitalmars-d digitalmars-d at puremagic.com
Sun Dec 7 14:38:19 PST 2014


On Sunday, 7 December 2014 at 22:13:50 UTC, Dmitry Olshansky 
wrote:
> 08-Dec-2014 00:36, John Colvin wrote:
>> On Sunday, 7 December 2014 at 19:56:49 UTC, Dmitry Olshansky 
>> wrote:
>>> 06-Dec-2014 18:33, H. S. Teoh via Digitalmars-d wrote:
>>>> On Sat, Dec 06, 2014 at 03:26:08PM +0000, Russel Winder via
>>>> Digitalmars-d wrote:
>>>> [...]
>>>>>>   primitives are passed by value; arrays and user-defined 
>>>>>> types are passed by reference only (killing memory usage)
>>>>>
>>>>> Primitive types are scheduled for removal, leaving only 
>>>>> reference types.
>>>> [...]
>>>>
>>>> Whoa. So they're basically going to rely on JIT to convert 
>>>> those boxed Integers into hardware ints for performance?
>>>
>>> With great success.
>>>
>>>> Sounds like I will never consider Java for computation-heavy 
>>>> tasks then...
>>>
>>> Interestingly, working with the JVM for the last 2 years, the 
>>> only problem I've found is the memory-usage overhead of 
>>> collections and non-trivial objects. In my tests, the 
>>> performance of simple numeric code was actually better with 
>>> Scala (not even plain Java) than with D (LDC), for instance.
>>
>> Got an example? I'd be interested to see a numerical-code 
>> example where the JVM can beat the LLVM/GCC backends on a real 
>> calculation (even if it's a small one).
>
> It was a trivial Gaussian integration:
> http://en.wikipedia.org/wiki/Gaussian_quadrature
>
> I do not claim the code is optimal or anything, but it's a 
> line-for-line port.
>
> // D version
> import std.algorithm, std.stdio, std.datetime;
>
> auto integrate(double function(double) f, double a, double b, int n){
>     auto step = (b-a)/n;
>     auto sum = 0.0;
>     auto x = a;
>     while(x<b)
>     {
>         sum += (f(x) + f(x+step))*step/2;
>         x += step;
>     }
>     return sum;
> }
>
> long timeIt(){
>     StopWatch sw;
>     sw.start();
>     auto r = integrate(x => x*x*x, 0.0, 1.0, 1000000);
>     sw.stop();
>     return sw.peek().usecs;
> }
>
> void main(){
>     auto estimate = timeIt;
>     foreach(_; 0..1000)
>         estimate = min(estimate, timeIt);
>     writef("%s sec\n", estimate/1e6);
> }
>
>
> // Scala version
>
> def integrate(f: Double => Double, a: Double, b: Double, n: Int): Double = {
>     val step = (b-a)/n;
>     var sum = 0.0;
>     var x = a;
>     while(x<b)
>     {
>         sum += (f(x) + f(x+step))*step/2;
>         x += step;
>     }
>     sum
> }
>
> def timeIt() = {
>     val start = System.nanoTime();
>     val r = integrate(x => x*x*x, 0.0, 1.0, 1000000);
>     val end = System.nanoTime();
>     end - start
> }
>
> var estimate = timeIt;
> for ( _ <- 1 to 1000 )
>     estimate = Math.min(estimate, timeIt)
> printf("%s sec\n", estimate/1e9);
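
Since the thread is about plain Java, a line-for-line Java 8 port 
of the same benchmark may be worth timing too. This is a sketch, 
not code from the exchange above; it uses 
java.util.function.DoubleUnaryOperator so the doubles are never 
boxed:

// Java version (sketch)
import java.util.function.DoubleUnaryOperator;

public class Integrate {
    static double integrate(DoubleUnaryOperator f, double a, double b, int n) {
        double step = (b - a) / n;
        double sum = 0.0;
        double x = a;
        while (x < b) {
            sum += (f.applyAsDouble(x) + f.applyAsDouble(x + step)) * step / 2;
            x += step;
        }
        return sum;
    }

    static long timeIt() {
        long start = System.nanoTime();
        double r = integrate(x -> x * x * x, 0.0, 1.0, 1000000);
        long end = System.nanoTime();
        return end - start;
    }

    public static void main(String[] args) {
        long estimate = timeIt();
        for (int i = 0; i < 1000; i++)
            estimate = Math.min(estimate, timeIt());
        System.out.printf("%s sec%n", estimate / 1e9);
    }
}

Using DoubleUnaryOperator rather than a generic Function<Double, 
Double> keeps the whole call path on primitive doubles, which is 
exactly the boxing question raised earlier in the thread.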

On my machine (Haswell i5) I get Scala taking 1.6x as long as 
the LDC version.

I don't know Scala though; I compiled using -optimise. Are there 
other arguments I should be using?
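
As for the boxed-Integer point earlier in the thread, the 
difference can be sketched in a few lines of Java (class and 
method names here are made up purely for illustration): a 
primitive accumulator versus a boxed one, where every iteration 
of the boxed loop unboxes and re-boxes unless the JIT's escape 
analysis elides the allocations.

// Boxing sketch (illustrative only)
public class BoxingDemo {
    // Primitive accumulator: no heap allocation in the loop.
    static long sumPrimitive(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += i;
        return sum;
    }

    // Boxed accumulator: each += unboxes and re-boxes, and each
    // new Long/Integer may be a fresh allocation unless the JIT
    // manages to elide it.
    static Long sumBoxed(int n) {
        Long sum = 0L;
        for (Integer i = 0; i < n; i++)
            sum += i;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumPrimitive(10000000));
        System.out.println(sumBoxed(10000000));
    }
}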

