d-programming-language.org
Timon Gehr
timon.gehr at gmx.ch
Sun Jul 3 13:20:50 PDT 2011
eles wrote:
> 1. please update http://www.d-programming-language.org/faq.html#q6
> ("Why fall through on switch statements?").
>
> "The reason D doesn't change this is for the same reason that
> integral promotion rules and operator precedence rules were kept the
> same - to make code that looks the same as in C operate the same. If
> it had subtly different semantics, it will cause frustratingly subtle
> bugs." was a good reason in its time, but it no longer applies (since
> fall-through code is simply *not* accepted anymore).
>
> now, flames are on (again) - well, I have no alternative.
I have to disagree.
>
> 2. http://www.d-programming-language.org/faq.html#case_range (Why
> doesn't the case range statement use the case X..Y: syntax?)
>
> saying that "Case (1) has a VERY DIFFERENT meaning from (2) and (3).
> (1) is inclusive of Y, and (2) and (3) are exclusive of Y. Having a
> very different meaning means it should have a distinctly different
> syntax." is a POST-rationalization (and, personally speaking, a real
> shame for such a nice language as D) of a bad choice that was
> exclusive-right limit in x..y syntax. Just imagine that you could
> allow "case X..Y" syntax and avoid that explanation (and faq).
>
Agreed, that answer does not make much sense. The reason why the current syntax
case x: .. case y: is better is this:
case x..y: // case statement that matches a range.
case x: .. case y: // range of case statements.
The first one would be the wrong way round.
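For concreteness, here is the actual inclusive case range syntax in use (the classify function is my own illustration, not from the original mail):

```d
import std.stdio;

// a minimal sketch of D's case range statement;
// case x: .. case y: matches x through y, *inclusive*.
string classify(int c)
{
    switch (c)
    {
        case 0: .. case 9:
            return "digit";
        case 10: .. case 99:
            return "two digits";
        default:
            return "other";
    }
}

void main()
{
    writeln(classify(9));   // both endpoints match:
    writeln(classify(10));  // 9 is a "digit", 10 is "two digits"
}
```

Note how the repeated `case` keyword makes it read as a range of case labels, not as a case matching the (elsewhere exclusive) range x..y.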
As to the included/excluded inconsistency: Different applications require
different conventions.
After all, x..y is just a pair that ought to be interpreted as a range.
It is a bit unfortunate, but I am quite sure it is the right decision, or at
least a reasonable trade-off.
> I made several arguments for changing this syntax:
> -logic (not just code-writing or compile-writing easiness) consistency
a[0..$];
a[0..$-1];
Which one looks more consistent (assuming both should return a slice of the
entire array a), and why?
> -consistent representation on the same number of bits of the maximum
> index of an array (while the length is not representable)
This is not a valid argument, please let it rest.
An array whose length is not representable is itself not representable, because it
does not fit into your machine's address space.
> -the fact that multi-dimensional slicing (possibly, a future feature)
> is far more convenient when x..y is inclusive (just imagine
> having to remember which elements are left out across the many
> dimensions of the data)
How would that work? Array slices are references to the same data.
It is quite safe to say D will *never* have multi-dimensional built-in array slicing.
> -the fact that Ruby has implemented the inclusive syntax too (so
> there was some need to do that although the right-exclusive syntax
> was already available)
a[i, j];  // start + length in Ruby, iirc
a[i..j];  // inclusive in Ruby
a[i...j]; // exclusive in Ruby
I assume Ruby offers exclusive slicing because having just inclusive slicing
was not considered quite sufficient.
(Or, alternatively, given that they also implemented start index + length, they
just wanted to prevent discussions like this one.)
Ruby slices have value semantics and are therefore quite different from D slices.
> -disjoint array slices would have disjoint index ranges (imagine
> cases where consecutive slices overlap a bit, like in moving average
> applications)
That is a special case, usually you need disjoint slices. And furthermore:
// moving average of n consecutive elements
// as you stated it requires slices, I am providing a naive O(n * a.length) solution.
int n = ...;
double[] a = ...;
assert(a.length >= n);
auto avg = new double[](a.length - n + 1);
// 1. right-inclusive slicing:
//foreach (i, ref x; avg) x = average(a[i .. i+n-1]);
// 2. right-exclusive slicing:
foreach (i, ref x; avg) x = average(a[i .. i+n]);
/* No comment */
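For reference, a self-contained version of the sketch above with the elided pieces filled in for illustration (the names movingAverage and average are mine, and std.algorithm's sum does the summation):

```d
import std.algorithm : sum;

double average(double[] r) { return sum(r, 0.0) / r.length; }

double[] movingAverage(double[] a, size_t n)
{
    assert(a.length >= n);
    auto avg = new double[](a.length - n + 1);
    // right-exclusive slicing: a[i .. i+n] covers exactly n elements
    foreach (i, ref x; avg) x = average(a[i .. i + n]);
    return avg;
}

void main()
{
    auto a = [1.0, 2.0, 3.0, 4.0];
    // windows [1,2], [2,3], [3,4]
    assert(movingAverage(a, 2) == [1.5, 2.5, 3.5]);
}
```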
> -now, the fact that "case X..Y" syntax will be allowable
That is an anti-reason because case X..Y: is flawed as explained above.
>
> I know that is implemented that way (and vala and python went with
> it). What would be the cost to replace those X..Y with X..(Y-1)?
> Aren't the above reasons worthy to consider such a change?
The above reasons are biased in that they do not mention any of the *benefits*
that the right-exclusive semantics bring.
Also, I personally think they are not valid reasons.
What would be the benefit of having to replace those X..Y with X..Y-1? It
certainly bears a cost, including that if you fail to do it, your program silently
breaks.
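To make that concrete: with today's exclusive semantics, a[0 .. 2] picks two elements; if the meaning of x..y were changed to inclusive, every such slice would silently become one element longer, with no compile error (a sketch, array contents made up):

```d
void main()
{
    auto a = [10, 20, 30, 40];
    // exclusive semantics (current D): indices 0 and 1
    assert(a[0 .. 2] == [10, 20]);
    // under an inclusive reinterpretation, the same expression would
    // mean indices 0, 1 and 2 -- the code still compiles, but every
    // unconverted slice yields silently wrong results.
}
```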
>
> Well, (almost) enough for now. I also maintain that unsigned types
> should throw out-of-range exceptions (in debug mode, so that release
> will run as fast as it gets) when decremented below zero, unless
> specifically marked as *circular* (i.e. intended behavior) or
> something like this. This will prevent some bugs. I see those quite
> often in my students' homework.
True, you seldom *want* an unsigned integer to underflow. I am not sure if it is
worth the slowdown though.
In practice, I think unsigned types are good for having access to all comparison
(and in C, shift) operators the hardware provides and for requiring less space
than long/cent/BigInt in large arrays if your values are positive and lie in
certain ranges. Not much more.
Is there a reason that those students use unsigned counter variables so often?
Are they programming for a 16-bit architecture?
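The classic way such homework breaks, sketched in D (the countdown loop below is the standard pitfall, not an example from the original mail):

```d
void main()
{
    uint[3] a = [1, 2, 3];
    uint sum = 0;
    // BUG: with an unsigned i, the condition "i >= 0" is always true;
    // decrementing past 0 wraps around to uint.max and overruns a.
    // for (uint i = 2; i >= 0; --i) sum += a[i]; // never terminates
    //
    // one correct alternative: test before decrementing
    for (uint i = 3; i-- > 0; )
        sum += a[i];
    assert(sum == 6);
}
```

A range-checked debug mode as proposed would turn the wraparound itself into an error instead of letting the loop run off the end of the array.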
Cheers,
-Timon
More information about the Digitalmars-d
mailing list