Null references redux

language_fan foo at bar.com.invalid
Wed Sep 30 13:21:35 PDT 2009


Wed, 30 Sep 2009 12:05:29 -0400, Jeremie Pelletier thusly wrote:

> Don wrote:
>> Greater risks come from using more complicated algorithms. Brute-force
>> algorithms are always the easiest ones to get right <g>.
> 
> I'm not sure I agree with that. Those algorithms are pretty isolated and
> really easy to write unittests for, so I don't see where the risk is in
> writing more complex algorithms; it's obviously harder, but not riskier.
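
As a small illustration of the point about isolated algorithms (this snippet 
is mine, not from Jeremie's post): a self-contained routine like a binary 
search can have its whole contract pinned down in a D unittest block, run 
with -unittest. The function name and test values below are made up for the 
example.

bool contains(const int[] haystack, int needle)
{
    // Classic binary search over a sorted array; no external state involved.
    size_t lo = 0, hi = haystack.length;
    while (lo < hi)
    {
        immutable mid = lo + (hi - lo) / 2;
        if (haystack[mid] == needle) return true;
        if (haystack[mid] < needle) lo = mid + 1;
        else hi = mid;
    }
    return false;
}

unittest
{
    // The algorithm depends only on its inputs, so a handful of cases
    // covers the interesting behaviour (hit, miss, empty input).
    assert(contains([1, 3, 5, 7], 5));
    assert(!contains([1, 3, 5, 7], 4));
    int[] empty;
    assert(!contains(empty, 0));
}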

Do you recommend writing larger algorithms, say a hard real-time 
distributed garbage collector (for 100+ processes/nodes) or even bigger 
pieces of software such as btrfs or ntfs file system drivers, in assembly? 
Don't you care about portability? Of course it would be nice to provide an 
optimal solution for each platform and each use case, but unfortunately 
managers who think in terms of TCO do not often agree.


