Null references redux

Jeremie Pelletier jeremiep at gmail.com
Wed Sep 30 14:05:18 PDT 2009


language_fan wrote:
> Wed, 30 Sep 2009 12:05:29 -0400, Jeremie Pelletier thusly wrote:
> 
>> Don wrote:
>>> Greater risks come from using more complicated algorithms. Brute-force
>>> algorithms are always the easiest ones to get right <g>.
>> I'm not sure I agree with that. Those algorithms are pretty isolated and
>> really easy to write unittests for, so I don't see where the risk is in
>> writing more complex algorithms; it's obviously harder, but not riskier.
> 
> Do you recommend writing larger algorithms like a hard real-time 
> distributed (let's say e.g. for 100+ processes/nodes) garbage collector 
> or even larger stuff like btrfs or ntfs file system drivers in assembly? 
> Don't you care about portability? Of course it would be nice to provide 
> an optimal solution for each platform and for each use case, but 
> unfortunately the TCO-thinking managers do not often agree.

Why does everyone associate complexity with assembly? You can write a 
more complex algorithm in the same language as the original one and get 
quite a good performance boost (e.g. binary search vs. walking an array; 
see the sketch below). Assembly is only useful for optimization once 
you've found the optimal algorithm and want to lower its overhead a step 
further.
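To make that concrete, here is a minimal sketch in D contrasting the two 
approaches over a sorted array (the function and variable names are mine, 
purely for illustration): the linear walk does O(n) comparisons, the 
binary search O(log n), and both are written in the same language.

import std.stdio;

// Linear walk: O(n) comparisons.
size_t linearSearch(const int[] arr, int key)
{
    foreach (i, v; arr)
        if (v == key)
            return i;
    return arr.length; // not found
}

// Binary search over a sorted array: O(log n) comparisons.
size_t binarySearch(const int[] arr, int key)
{
    size_t lo = 0, hi = arr.length;
    while (lo < hi)
    {
        immutable mid = lo + (hi - lo) / 2;
        if (arr[mid] < key)
            lo = mid + 1;
        else if (arr[mid] > key)
            hi = mid;
        else
            return mid; // found
    }
    return arr.length; // not found
}

void main()
{
    auto data = [1, 3, 5, 7, 9, 11];
    writeln(linearSearch(data, 7)); // prints 3
    writeln(binarySearch(data, 7)); // prints 3
}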

I don't recommend any particular language anyway; the base algorithm is 
usually independent of its implementation language. Whether it's 
implemented in C#, D, or assembly, it's going to do the same thing, just 
at different performance levels.

For example, a simple binary search is already faster in D than in, say, 
JavaScript, and faster still in assembly than in D. That doesn't make 
your entire program harder to code, nor does it change the logic.


