Null references redux

Yigal Chripun yigal100 at gmail.com
Mon Sep 28 12:35:43 PDT 2009


On 28/09/2009 15:28, Jeremie Pelletier wrote:
>>
>> here's a type-safe alternative
>> note: untested
>>
>> struct Vec3F {
>> float[3] v;
>> alias v[0] x;
>> alias v[1] y;
>> alias v[2] z;
>> }
>>
>> D provides alignment control for structs, why do we need to have a
>> separate union construct if it is just a special case of struct
>> alignment?
>
> These aliases won't compile, and that was only one out of many union uses.

what other use cases for unions exist that cannot be redesigned in a 
safer way?
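For reference, here is a sketch (likewise untested in this thread's context) of one way the same overlay could be written so that it does compile under D2: ref-returning accessor functions in place of the alias-to-element syntax, still with no union involved.

```d
struct Vec3F {
    float[3] v;

    // Ref-returning accessors expose x/y/z views of the array,
    // so both reads and writes go through to v[0..3].
    ref float x() { return v[0]; }
    ref float y() { return v[1]; }
    ref float z() { return v[2]; }
}
```

Usage is then the same as with a union-based vector: `p.x = 1.0f;` writes through to `p.v[0]`.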

>
>> IMO the use cases for union are very rare and they all can be
>> redesigned in a type safe manner.
>
> Not always true.
>
>> when software was small and simple, hand tuning code with low level
>> mechanisms (such as unions and even using assembly) made a lot of
>> sense. Today's software is typically far more complex and way too
>> big to risk losing safety features for marginal performance gains.
>>
>> micro-optimizations simply don't scale.
>
> Again, that's a lazy view on programming. High level constructs are
> useful to isolate small and simple algorithms which are implemented at
> low level.

One way to define programming is "being lazy": you ask the machine to 
do your work because you are too lazy to do it yourself.

Your view above, about simple algorithms implemented at low level, is 
exactly where we disagree.

Have you ever heard of Stalin, the aggressively optimizing whole-program 
Scheme compiler (I'm not talking about the dictator)?

I was pointing to a trade-off at play here:
you can write low-level, hand-optimized code that is hard to maintain 
and reason about (for example, hard to provide a formal proof of 
correctness for). You get some small, non-scalable performance wins and 
lose on other fronts, like being able to prove your code correct.

The other way is to write high-level, very regular code that is easy to 
maintain and reason about, and to leave optimization to the tools. 
Granted, there could be some initial performance hit compared to the 
previous approach, but this is more portable: hardware changes do not 
affect the code, you just re-run the tool; new optimization techniques 
can be employed by running a newer version of the tool; and so on.
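As a hypothetical illustration of that trade-off (the function and data are mine, not from the thread): the plain version below is trivial to audit, and an optimizing compiler is free to unroll or vectorize it for whatever target it builds for, whereas a hand-unrolled variant would bake today's hardware assumptions into the source.

```d
// The plain, regular version: easy to reason about, and the
// compiler is free to vectorize or unroll it per target.
float dot(const float[] a, const float[] b) {
    float sum = 0;
    foreach (i; 0 .. a.length)
        sum += a[i] * b[i];
    return sum;
}
```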

I should also note that the second approach is already applied by 
compilers: unless you use inline ASM, the compiler will not use the 
entire instruction set, which contains special cases for performance 
tuning.

>
> These aren't just marginal performance gains, they can easily be up to
> 15-30% improvements, sometimes 50% and more. If this is too complex or
> the risk is too high for you then don't use a systems language :)

Your approach makes sense if you are implementing, say, a calculator.
It doesn't scale to larger projects. Even C++ has overhead compared to 
assembly, yet you write performance-critical code in C++, right?

Java had a reputation for being slow, yet today performance-critical 
servers are written in Java rather than C++ in order to get faster 
execution.



More information about the Digitalmars-d mailing list