Invariants are useless the way they are defined
Davidson Corry
davidsoncorry at gmail.com
Mon Aug 26 00:20:55 PDT 2013
On Monday, 26 August 2013 at 06:14:02 UTC, Ali Çehreli wrote:
> On 08/25/2013 05:16 AM, deadalnix wrote:
>
> > The problem is that invariants are checked at the
> > beginning/end of public function calls. As a consequence, it
> > is impossible to use any public
> > method in an invariant.
>
> That's a very interesting observation. Could the solution be
> running the invariant only once, at the outermost public
> function call? Hm... It would have to be a runtime feature
> then, right? Every public function would have calls to the
> invariant but those calls would have to be elided at runtime. I
> think...
>
> Here is another interesting observation: It is acceptable and
> quite normal that the object is in a limbo state during a public
> member function. As a consequence, any function that operates
> on the object must use the object in a write-only fashion
> during that time frame. This is true even for non-member
> functions that the object is passed to. So, in theory, even a
> logging function cannot use the object. Hm...
If you will indulge a D newbie, lurker and former Eiffelist:
In design-by-contract theory, the invariant is part of the
definition of a user-defined type. If type T has an invariant, an
object t of that type is not a "real T" if it does not meet the
invariant while publicly accessible (that is, while it is not in
the hands of a member function of class T).
For instance, the purpose of the constructor is to establish the
invariant -- some Eiffelists argue that this is the *sole*
purpose of the constructor, and that constructors which perform
initialization beyond that are overstepping their bounds.
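To keep that concrete, here is a minimal D sketch of what I mean
(the class and its members are invented purely for illustration):

    class Stack
    {
        private int[] items;
        private size_t count;   // invariant: count never exceeds the storage

        invariant()
        {
            assert(count <= items.length);
            // Calling a public member function from here would itself try
            // to run the invariant -- the recursion problem quoted above.
        }

        // The constructor's job is to establish the invariant.
        this(size_t capacity)
        {
            items = new int[capacity];
        }

        // Conventionally, the invariant is checked on entry and on exit.
        void push(int value)
        {
            assert(count < items.length);   // precondition, kept as a bare assert
            items[count++] = value;
        }
    }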
Another point: an object T t should be capable of being tested for its
invariant (and passing that test) anywhere and at any time it is
publicly accessible. It is only a matter of convenience that
implementations test the invariant at entry to and exit from
public member functions of class T. That convenience relies on
the guarantee that *only* T member functions are allowed to
modify the state of object t -- all other operations in the
program must treat t as const (and can, of course, *rely* on t
being const)... which, in turn, has all kinds of implications
about what you can safely make public about a T object.
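In fact D can already express the "test it anywhere" part: using a
class reference as the argument of an assert runs its invariant
(when contract checks are compiled in). Reusing the invented Stack
above:

    void logState(Stack s)   // a non-member that only observes s
    {
        assert(s);           // explicitly runs s's class invariant
        // ... read-only use of s ...
    }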
All of this has implications for D's contract guarantees. For
instance, you may not design an invariant that fails on T.init
(which is the state of t after you call t.clear()). In other
words, D's object model doesn't necessarily match strict DbC
theory. And Jonathan Davis's notion of a t that "isn't really a
T" but should be acceptable by T.opAssign() also falls outside
the theory.
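For example, an invariant shaped like the following runs afoul of
that rule, because the .init state carries an empty string (the
class is, again, invented for illustration):

    class Parser
    {
        private string source;           // empty in the .init state

        invariant()
        {
            assert(source.length > 0);   // would fail on Parser.init
        }

        this(string s) { source = s; }   // established by the constructor instead
    }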
Offhand, I can think of two approaches that might address these
desires without too badly weakening the invariant's guarantee (a
rough code sketch of both follows the list):
* attach an internal "depth gauge" to each T t. At entry to any
public T member function, the depth gauge is incremented; at
exit, decremented. The invariant is tested *only* when the depth
gauge transitions from or to zero: that is, at its transition
from "surfaced" (public) to "submerged" (in the hands of T class
functions) or vice versa. This would allow the implementation to
elide any invariant tests while T member functions call other
functions, even functions that are themselves public T member
functions.
* an attribute which tags a public T function as accepting
"broken Ts". Such a function would be expected to establish the
invariant in the object and to test it at exit, but would *not*
test the invariant at entry.
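Here is a rough hand-rolled sketch of both ideas. It uses an
ordinary check() routine in place of the built-in invariant so that
the checking points stay entirely under our control; all names are
invented, and the @acceptsBroken attribute mentioned in the comments
does not exist -- it is merely what the second bullet would ask the
compiler to provide:

    class Counter
    {
        private int low, high;   // intended invariant: low <= high
        private uint depth;      // the "depth gauge"

        private void check() { assert(low <= high); }   // stand-in invariant

        // bookkeeping shared by every public member function
        private void enter() { if (depth++ == 0) check(); }   // surfaced -> submerged
        private void leave() { if (--depth == 0) check(); }   // submerged -> surfaced

        this(int lo, int hi)
        {
            low = lo; high = hi;
            check();             // the constructor establishes the invariant
        }

        void shift(int by)
        {
            enter(); scope(exit) leave();
            low += by;           // the object may be "broken" here (low > high)...
            raiseHigh(by);       // ...yet may freely call another public member:
        }                        // the gauge sits at 2 inside, so nothing re-checks

        void raiseHigh(int by)
        {
            enter(); scope(exit) leave();
            high += by;
        }

        // Second idea: a member that accepts a "broken" Counter. Think of it
        // as what a hypothetical @acceptsBroken attribute would generate:
        // no entry-side check, but the invariant must hold (and is tested)
        // on the way out.
        void reset(int lo, int hi)
        {
            // deliberately no enter()/check() here
            low = lo;
            high = hi;
            check();
        }
    }

With the usual entry/exit checking, the nested call to raiseHigh()
would trip the invariant in mid-operation; with the depth gauge it
does not.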
These two overlap but not entirely, and I'm still thinking about
whether you would want both, or just one or the other. As I say,
this is off the top of my head.
I hope I have contributed a useful notion, and not just muddied
the waters. Thanks for listening.
-- Dai