Do we really need const?

Bill Baxter dnewsgroup at billbaxter.com
Mon Sep 17 18:20:24 PDT 2007


Robert Fraser wrote:
> Lionello Lunesu Wrote:
> 
> There seems to be a general push (among many computer scientists) to
> enforce stricter rules, yet some of the most successful languages in
> the past few years have been dynamically/duck typed. 

Note however that as these languages mature people are gradually trying 
to put some notion of interface-checking back in.  I only really know 
about Python, but there we have PyProtocols
(http://peak.telecommunity.com/PyProtocols.html) and zope.interface 
(http://www.zope.org/Products/ZopeInterface) that both aim to put some 
non-duck type checking features back into the language.  Because, 
surprise, when you're scaling up to huge systems it becomes difficult to 
figure out exactly what kind of duck you're supposed to be passing.

> I'd like to pose a question to those who have used C++'s const: do
> you feel that it has saved more time by preventing bugs than it has
> taken by being forced to type it all the time, and the time spent
> when it has to be removed all throughout a hierarchy, as inevitably
> has to happen at least once? That is, const-correctness is a time
> investment, so do you feel that investment has paid off for you?

That's kind of why I started this thread.  I started using C++ in 1995 
and it has been my main language since about 1998.  I'm very used to 
C++'s const.  But I do vaguely recall finding it terribly annoying in 
1995 when I started moving over from C.  The C++ people kept telling me 
that const correctness was like eating your peas.  "You may not like it, 
but it's good for you."  And now I like const, just like I like eating 
peas now too.  It's not that I prefer the taste of peas to chocolate ice 
cream, but if I don't eat my peas I get this feeling like my health may 
fall apart at any minute.  It's the same feeling I get from not using const.

That said, I can program in Python without const and not flinch at all,
because const correctness is just not a part of Python.  There are no
peas in Python-land so I don't feel like I have to eat them.  It's ice 
cream for every meal!  Of course in Python-land "slow" has also been 
declared the new "fast", so I also don't flinch about making heap 
allocations willy-nilly.

> I can say that working in Java, I have _never_ felt that if I pass a
> class reference that was "constant" in nature to a method written by
> somebody else or even to entirely different subsystem, that the
> invariantness contract, specified only in the documentation, would be
> broken. Compiler checks in that case end up being as useless and
> annoying as checked exceptions.

Here's the one case that makes me want const: efficient vector math.
Say you're writing a routine to compute whether or not a point is inside
the circumcircle defined by three others.  Here's how I wrote it in D:

     // Return true if point d is inside the circumcircle defined by points a,b,c.
     // The points a,b,c should be given in counter-clockwise order.
     bool in_circle(Point a, Point b, Point c, ref Point d)
     {
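         // Translate all points so d sits at the origin; the in-circle
         // test then reduces to the sign of a 3x3 determinant.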
         a -= d;
         b -= d;
         c -= d;
         assert(is_CCW(a,b,c),
                "input circle points are not in ccw order");

         Scalar a2 = a.sqrnorm();
         Scalar b2 = b.sqrnorm();
         Scalar c2 = c.sqrnorm();

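         // For a,b,c in CCW order, this determinant is positive exactly
         // when d lies strictly inside their circumcircle.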
         Scalar det = a2*(b.x*c.y - b.y*c.x)
                    + b2*(a.y*c.x - a.x*c.y)
                    + c2*(a.x*b.y - a.y*b.x);
         return det >= 0;
     }
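
For reference, the code above assumes a small 2D point type along these
lines.  This is just a sketch to make the example self-contained (assume
Scalar is double here; in my real code it may be something else):

     alias double Scalar;

     struct Point
     {
         Scalar x, y;

         // In-place subtraction; this is what "a -= d" invokes.
         void opSubAssign(Point p) { x -= p.x; y -= p.y; }

         // Squared Euclidean length, x^2 + y^2.
         Scalar sqrnorm() { return x*x + y*y; }
     }

     // True if a,b,c wind counter-clockwise (positive signed area).
     bool is_CCW(Point a, Point b, Point c)
     {
         return (b.x - a.x)*(c.y - a.y) - (b.y - a.y)*(c.x - a.x) > 0;
     }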

Really I would like to make 'ref Point d' there be const.  I'm passing 
it by reference because it's slightly more efficient and in_circle can 
get called a *lot*.  I go ahead and pass a,b,c by value because I'm 
going to have to push a copy on the stack to modify them anyway.  If I 
weren't modifying them I'd pass them by reference, too.

But those are implementation details that the caller of in_circle 
doesn't really care about.  So it's odd they should be in the interface.

Ref/const ref is not a very direct solution for this kind of need. 
What I'd really like to be able to do is declare the function to be
generically non-mutating (like 'in'), but at the same time
a) allow the implementation to modify its arguments if it wants to and
b) make the call using the most efficient mechanism possible.  I don't 
really want to have to guess whether the argument is big enough to 
justify pass by reference or not.  It probably depends a lot on the 
architecture and on how many times the variable actually gets accessed
in the end.

Just using plain 'ref' would maybe be an acceptable solution, except you
can't pass literals by ref.  So if you make a min template like
"T min(T)(ref T a, ref T b) {...}", then min(0,x) won't compile.  That's
just too useful to disallow.

And then if T is a class type it's already a reference, so there's no
need to take a reference to the reference.

You can work around these things with lots of static if(is(T==class))
special-casing, but it gets ugly fast.  It would be great if something like

    T min(T)(in T a, in T b) {...}

"just worked".  I.e. prevented visible modifications to a,b but didn't 
do unnecessary, inefficient copying of big structures, and didn't take 
unnecessary references of arguments that are already references, and 
allowed calling with literals.
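
To be concrete, the static if workaround looks something like this (an
untested sketch, just to show the flavor):

     T min(T)(T a, T b)   // by value, so min(0, x) at least compiles
     {
         static if (is(T == class))
         {
             // T is already a reference; copying it only copies a pointer.
             return (a < b) ? a : b;
         }
         else
         {
             // Fine for ints, but a big struct gets copied twice here,
             // so on top of this you start adding ref overloads and more
             // static ifs to pick between them.  Ugly.
             return (a < b) ? a : b;
         }
     }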

Give me a way to do that and I'll be happy.  I'd even be willing to 
ditch the 'prevent modifications' bit as long as I can have efficiency 
and the ability to pass constants.

--bb
