Compiler optimizations

dennis luehring dl.soluz at gmx.net
Sun Apr 30 11:07:22 PDT 2006


>>> int divTest2(int divisor, int total)
>>> {
>>>   int sum = 0;
>>>   for(int i = 0; i < total; i++)
>>>   {
>>>     int quotient = i * ( 1.0 / divisor ); // !!!!!!!!
>>>     sum += quotient;
>>>   }
>>>   return sum;
>>> }

you say that integer division is slower than the same division done in 
floating point, right?

but what is the problem with my benchmark then?

--- sorry in c ---

#include <stdio.h>
#include <conio.h>
#include <time.h>

int main()
{
   int result = 0;
   int div = 10; /* change to "double div = 10;" for the floating-point run !!! */

   clock_t start, finish;
   double  duration;

   start = clock();
   for(int l=0; l<100000; l++)
   {
     for(int i=0; i<10000; i++)
     {
       result += (i*div) / div;
     }
   }
   finish = clock();
   duration = (double)(finish - start) / CLOCKS_PER_SEC;

   printf("[%i] %2.1f seconds\n",result,duration);

   getch(); /* conio.h is Windows-only; use getchar() elsewhere */
   return 0;
}

on a 3 GHz machine:
int:    ~4.8 seconds
double: ~41.7 seconds - nearly ten times slower

the problem with your benchmark is that its results are not really usable.
try to write an "int doIntDivisionWithFloat(int Value, int Div)"
and benchmark that whole function - not just one half of such a 
function...

int IntDivUsingInt(int divisor, int value)
{
   return value/divisor;
}

int IntDivUsingFloat(int divisor, int value)
{
   return value * (1.0 / divisor);
}

int sum = 0;
for(int i = 0; i < longlongtime; i++)
{
   sum += IntDivUsingInt( 10, i*10 );
   /* or: sum += IntDivUsingFloat( 10, i*10 ); */
}

is your IntDivUsingFloat still faster?

ciao dennis

More information about the Digitalmars-d mailing list