Null references redux

Jeremie Pelletier jeremiep at gmail.com
Wed Sep 30 09:05:29 PDT 2009


Don wrote:
> Jeremie Pelletier wrote:
>> Yigal Chripun wrote:
>>> On 29/09/2009 16:41, Jeremie Pelletier wrote:
>>>
>>>> What I argued about was your view that today's software is too big and
>>>> complex to bother optimizing.
>>>>
>>>
>>> That is not what I said.
>>> I was saying that hand-optimized code needs to be kept to a minimum and 
>>> used only for visible bottlenecks, because the risk of introducing 
>>> low-level unsafe code is greater in bigger, more complex software.
>>
>> What's wrong with taking a risk? If you know what you're doing, where 
>> is the risk, and if not, how will you learn? If you write your software 
>> correctly, you could add countless assembly optimizations and never 
>> compromise the security of the whole thing, because these 
>> optimizations are isolated; if it crashes there, you only have a 
>> narrow area to debug.
>>
>> There are some parts where hand optimizing is almost useless, like 
>> network I/O, where latency is already so high that faster code 
>> won't make a difference.
>>
>> And sometimes the optimization doesn't even need assembly; it just 
>> requires a different high-level construct or a different 
>> algorithm. The first optimization is to pick the most efficient data 
>> structures and the most efficient algorithms for a given task, and 
>> THEN, if you can't optimize it any further, you dig into assembly.
>>
>> People seem to think assembly is something magical and incredibly 
>> hard; it's not.
>>
>> Jeremie
> 
> Also, if you're using asm on something other than a small, simple loop, 
> you're probably doing something badly wrong. Therefore, it should always 
> be localised, and easy to test thoroughly. I don't think local extreme 
> optimisation is a big risk.

That's also how I do it once I've found the ideal algorithm. I've never 
had any problems or seen any risk with this technique, but I have seen 
some good performance gains.
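
To illustrate what I mean (a minimal sketch of my own, not code from this 
thread; the addSat name, the clamping behaviour and the DMD-style x86 
inline asm are all assumptions), the hand-written part stays confined to 
one tiny function, with a portable fallback and the unittest sitting 
right next to it:

// Saturating unsigned add; the asm is isolated in one small function.
uint addSat(uint a, uint b)
{
    uint result;
    version (D_InlineAsm_X86)
    {
        asm
        {
            mov EAX, a;          // load first operand
            add EAX, b;          // unsigned add, sets carry on overflow
            jnc noclamp;         // no carry -> sum fits in 32 bits
            mov EAX, 0xFFFFFFFF; // overflow -> clamp to uint.max
        noclamp:
            mov result, EAX;     // store the result into the local
        }
    }
    else
    {
        // Portable fallback for targets without x86 inline asm.
        ulong sum = cast(ulong)a + b;
        result = sum > uint.max ? uint.max : cast(uint)sum;
    }
    return result;
}

unittest
{
    assert(addSat(1, 2) == 3);
    assert(addSat(uint.max, 1) == uint.max);
    assert(addSat(uint.max, uint.max) == uint.max);
}

If that asm block ever misbehaves, there are only a handful of lines to 
look at, and the unittest catches it immediately.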

> Greater risks come from using more complicated algorithms. Brute-force 
> algorithms are always the easiest ones to get right <g>.

I'm not sure I agree with that. Those algorithms are pretty isolated and 
really easy to write unittests for, so I don't see where the risk is in 
writing more complex algorithms; it's obviously harder, but not riskier.
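
For instance (again just a hypothetical sketch; lowerBound and 
lowerBoundBrute are names of my own invention, not anything from this 
thread), the easiest way I know to keep the cleverer version honest is 
to check it against the brute-force one inside the unittest:

// Binary search for the index of the first element >= key.
size_t lowerBound(int[] a, int key)
{
    size_t lo = 0, hi = a.length;
    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

// Brute-force reference: a trivially correct linear scan.
size_t lowerBoundBrute(int[] a, int key)
{
    size_t i = 0;
    while (i < a.length && a[i] < key)
        ++i;
    return i;
}

unittest
{
    int[] data = [1, 3, 3, 7, 9, 12];
    // The complex version must agree with the brute-force one everywhere.
    for (int key = -1; key < 15; ++key)
        assert(lowerBound(data, key) == lowerBoundBrute(data, key));
}

The complex routine is harder to write, but the test pins it to the 
obvious one, so there's no extra risk in shipping it.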

On the other hand, things like GUI libraries are one big package where 
unittests are useless most of the time; that's a much greater risk, even 
with straightforward, trivial code.

I read somewhere that the best optimizer is between your ears, and I have 
yet to see anyone or anything prove that quote wrong! Besides, how are 
you going to get comfortable with "complex" stuff if you never play with 
it? It's really only complex while you're learning it; once it has been 
assimilated by the brain, it becomes almost trivial to use.
