Null references redux

Don nospam at nospam.com
Thu Oct 1 01:10:24 PDT 2009


language_fan wrote:
> Wed, 30 Sep 2009 12:05:29 -0400, Jeremie Pelletier thusly wrote:
> 
>> Don wrote:
>>> Greater risks come from using more complicated algorithms. Brute-force
>>> algorithms are always the easiest ones to get right <g>.
>> I'm not sure I agree with that. Those algorithms are pretty isolated and
>> really easy to write unittests for, so I don't see the risk in writing
>> more complex algorithms; it's obviously harder, but not riskier.
> 
> Do you recommend writing larger algorithms, such as a hard real-time 
> distributed garbage collector (say, for 100+ processes/nodes), or even 
> larger stuff like btrfs or ntfs file system drivers, in assembly? 
> Don't you care about portability? Of course it would be nice to provide 
> an optimal solution for each platform and for each use case, but 
> unfortunately the TCO-thinking managers do not often agree.

You deal with this by ensuring that you have a clear division between 
code that is "simple but needs to be as fast as possible" (which you do 
low-level optimisation on) and code that is "complicated, but less 
speed-critical".
It's a classic problem of separation of concerns: you need to ensure 
that no piece of code is required to be both fast AND clever at the 
same time.

Incidentally, it's usually not possible to make something optimally fast 
unless it's really simple.
So no, you should never do something complicated in asm.
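
For instance, a minimal D sketch of that split might look like the 
following (sumSquares and chooseBucket are just illustrative names, 
not anything from the discussion above):

// Simple and speed-critical: a self-contained hot loop that can be
// hand-tuned (or even rewritten in asm) without touching anything else.
double sumSquares(const(double)[] xs) pure nothrow @nogc
{
    double s = 0;
    foreach (x; xs)
        s += x * x;
    return s;
}

// Complicated but less speed-critical: the decision logic stays in
// ordinary, portable code where clarity matters more than cycles.
size_t chooseBucket(const(double)[][] groups, double threshold)
{
    size_t best = 0;
    double bestScore = -double.infinity;
    foreach (i, g; groups)
    {
        immutable score = sumSquares(g) - threshold * g.length;
        if (score > bestScore)
        {
            bestScore = score;
            best = i;
        }
    }
    return best;
}

Only sumSquares would ever need platform-specific tuning; chooseBucket 
stays clear, testable and portable.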


