Null references redux

Jeremie Pelletier jeremiep at gmail.com
Wed Sep 30 14:42:40 PDT 2009


language_fan wrote:
> Wed, 30 Sep 2009 17:05:18 -0400, Jeremie Pelletier thusly wrote:
> 
>> language_fan wrote:
>>> Wed, 30 Sep 2009 12:05:29 -0400, Jeremie Pelletier thusly wrote:
>>>
>>>> Don wrote:
>>>>> Greater risks come from using more complicated algorithms.
>>>>> Brute-force algorithms are always the easiest ones to get right <g>.
>>>> I'm not sure I agree with that. Those algorithms are pretty isolated
>>>> and really easy to write unittests for, so I don't see where the risk
>>>> is in writing more complex algorithms; it's obviously harder, but
>>>> not riskier.
>>> Do you recommend writing larger algorithms, like a hard real-time
>>> distributed garbage collector (say, for 100+ processes/nodes), or even
>>> bigger things like btrfs or ntfs file system drivers, in assembly?
>>> Don't you care about portability? Of course it would be nice to
>>> provide an optimal solution for each platform and each use case, but
>>> unfortunately TCO-minded managers do not often agree.
>> Why does everyone associate complexity with assembly? You can write a
>> more complex algorithm in the same language as the original one and get
>> quite a good performance boost (i.e. binary search vs walking an array).
>> Assembly is only useful for optimizing once you've found the optimal
>> algorithm and want to lower its overhead a step further.
>>
>> I don't recommend any language anyway; the base algorithm is often
>> independent of its implementation language. Whether it's implemented in
>> C#, D or assembly, it's going to do the same thing at different
>> performance levels.
>>
>> For example, a simple binary search is already faster in D than in,
>> say, JavaScript, but it's even faster in assembly than in D. That
>> doesn't make your entire program harder to code, nor does it change
>> the logic.
> 
> Well, I meant that we can assume the algorithm choice is already optimal.
> 
> Porting a high level program to assembly tends to grow the line count
> quite a bit. For instance, I have experience converting Java code to
> Scala, and C++ to Haskell; in both cases the LOC decreased by about
> 50-90%. If you convert things like foreach, ranges, complex expressions,
> lambdas, and scope() constructs to assembly, the line count increases by
> at least an order of magnitude. Reading the lower level code is much
> harder, and you lose important safety nets like the type system.

Yeah, but I don't rate my code by the number of lines I write; I rate it 
by how well it performs :)
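
To make the binary search point above concrete, here's a rough sketch in 
D (the names are just for illustration). Same task, two algorithms, one 
language; an assembly version of the second one would do the exact same 
thing, only faster:

// Walking the array is O(n); binary search on sorted data is O(log n).
bool contains(int[] haystack, int needle)
{
    foreach (x; haystack)               // linear walk
        if (x == needle)
            return true;
    return false;
}

bool containsSorted(int[] haystack, int needle)
{
    size_t lo = 0, hi = haystack.length;
    while (lo < hi)                     // binary search
    {
        size_t mid = lo + (hi - lo) / 2;
        if (haystack[mid] < needle)
            lo = mid + 1;
        else if (haystack[mid] > needle)
            hi = mid;
        else
            return true;
    }
    return false;
}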

I usually only go into assembly after profiling, or when I know from the 
start it's going to be faster, as with matrix multiplication.
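
For example (just an illustrative sketch, not code from an actual 
project), the profile usually points at one tight inner loop, and that's 
the only part worth rewriting in asm; the rest stays plain D:

// Naive square matrix multiply, row-major, size n x n.
// Profiling typically shows almost all the time in the innermost loop;
// that loop is the candidate for hand-written asm or SSE, guarded by
// something like version(D_InlineAsm_X86), with this loop as fallback.
void matmul(double[] a, double[] b, double[] c, size_t n)
{
    foreach (i; 0 .. n)
        foreach (j; 0 .. n)
        {
            double sum = 0;
            foreach (k; 0 .. n)         // hot spot
                sum += a[i * n + k] * b[k * n + j];
            c[i * n + j] = sum;
        }
}

That way portability doesn't go out the window either: platforms without 
the asm path just keep the D loop.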

If lines of code were more important than performance, you'd get entire 
OSes and all their programs written in JavaScript, and you'd wait 20 
minutes for your computer to boot.


