Ada, SPARK [Was: Re: tolf and detab (language succinctness)]

bearophile bearophileHUGS at lycos.com
Wed Oct 6 14:48:02 PDT 2010


Bruno Medeiros:

>[About Ada] That "begin" "end <name of block>" syntax is awful. I already think just "begin" "end" syntax is bad, but also having to repeat the name of the block/function/procedure/loop at the "end", that's awful.<

If you take a look at my dlibs1, probably more than 60_000 lines of D1 code, you may see that all closing braces of templates, functions, classes, and structs have a comment like:

} // end FooBar!()

Ada makes sure that the name at the end doesn't get out of sync with the true function name :-)

I agree it's not DRY, but when you aren't using a good IDE, that comment helps you understand where you are in the code.
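For example (a hypothetical D snippet, just to show the convention):

struct Matrix(T) {
    T[][] data;

    void clear() {
        foreach (row; data) {
            row[] = T.init;
        } // end foreach
    } // end clear()
} // end Matrix!()

Unlike Ada's "end Matrix;", the compiler never checks those comments, so they can silently get out of date.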

Ada is not designed for writing Web applications; it's useful where you need very reliable software, high-integrity systems. In such situations it's very hard to beat it with languages such as C/C++/Java/C#, and a language that helps you avoid bugs is better than a very handy language like Ruby. The C language was not designed for writing reliable systems, and it shows. The D2 language distances itself from C, trying to be more useful for writing reliable code, and it uses some of the ideas of Ada (but D2 still misses a basic Ada feature important for highly reliable software systems: optional integral overflow checks, which in Ada are active by default).
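In Ada an overflowing signed addition raises Constraint_Error by default. In D you have to ask for the check yourself; here is a minimal sketch of opt-in checked addition (it assumes the core.checkedint module of druntime, and checkedAdd is a made-up name):

import core.checkedint: adds;

// Checked addition: fails loudly on signed overflow instead of wrapping.
int checkedAdd(int a, int b) {
    bool overflow = false;
    immutable sum = adds(a, b, overflow); // adds() sets the flag on overflow
    assert(!overflow, "integral overflow in checkedAdd");
    return sum;
}

unittest {
    assert(checkedAdd(2, 3) == 5);
    // checkedAdd(int.max, 1) would trip the assert at run time.
}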


Sometimes even Ada may not be enough. There is a strict subset of Ada named SPARK that disallows some untidy Ada features (all SPARK programs are legal Ada programs, but not all Ada programs are legal SPARK programs). SPARK has contracts that are verified statically:

http://en.wikipedia.org/wiki/SPARK_%28programming_language%29

Following some links I've found some example code, a SPARK implementation of the Skein cryptographic hash algorithm:

http://www.skein-hash.info/sites/default/files/spark-1.0.zip

That code (for example "skein.adb") seems easy to read, it doesn't have too many comments, and something like 10-20% of the code is contracts that are verified statically (a small percentage of those contracts needs to be proved by hand). In SPARK functions need to be pure, and the end result was Skein code just about 10% slower than the C implementation (they have even found a little bug in the C reference implementation, and that was C code debugged to death).

You don't want to use SPARK for web code, but if you want to write code to guide space probes or X-ray machines (http://en.wikipedia.org/wiki/Therac-25 ) you want to use a safer language. A cryptographic algorithm implementation must be critically bug-free, otherwise you risk losing important data in the decryption phase. A proof of the algorithm is not enough, the code too must be correct :-) And as Walter has recently reminded us, even proofs aren't enough, because engineering reality is complex: you also need watchdogs, various safeties, etc.

I have found a few tutorial pages about SPARK:
http://www.adacore.com/home/products/sparkpro/tokeneer/discovery/

Function calls always name their arguments; this is also a Python feature that I have asked for in D several times, because it helps avoid mistakes at the call site:

   File.GetString( TheFile => TheFile,
                   Text    => LastTime,
                   Stop    => TimeCount);
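D2 has no named arguments, so the closest emulation today is probably a builder-style struct; this is just a sketch, and all the names (GetStringArgs, file, outText, stopAt) are invented:

struct GetStringArgs {
    string theFile;
    string text;
    uint stop;

    GetStringArgs file(string v) { theFile = v; return this; }
    GetStringArgs outText(string v) { text = v; return this; }
    GetStringArgs stopAt(uint v) { stop = v; return this; }
}

void getString(GetStringArgs a) {
    // hypothetical body, uses a.theFile, a.text and a.stop
}

void main() {
    // The names show up at the call site, a bit like SPARK's named association:
    getString(GetStringArgs().file("log.txt").outText("LastTime").stopAt(3));
}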


The page about contract programming in SPARK is interesting:
http://www.adacore.com/home/products/sparkpro/tokeneer/discovery/lesson_contracts/

Curiously, SPARK has the feature I have suggested for D2 under the name @outer. A function that uses names from outer scopes must specify what globals it uses and whether they are in read/write/read-write mode (I have copied it here so this idea doesn't get forgotten: http://d.puremagic.com/issues/show_bug.cgi?id=5007 ):


>         procedure Inc (X : in out Integer);
>         --# global in out CallCount;
>    
> The contract above specifies that the formal parameter X and global variable CallCount must be
> read by at least one path and must be updated by at least one path through the procedure.
> 
> SPARK static analysis will report an error if the implementation of the procedure:
> 
>     * does not read the variable CallCount on at least one path through the procedure; or
>     * does not write to the variable CallCount on at least one path through the procedure; or
>     * reads or updates any other global variables; or
>     * does not read the variable X on at least one path through the procedure; or
>     * does not write to the variable X on at least one path through the procedure.
> 
> SPARK contracts can specify, using postconditions, even more about the effects on
> the parameter X and global variable CallCount – see code below.
> 
>         procedure Inc (X : in out Integer);
>         --# global in out CallCount;
>         --# post X = X~ + 1 and
>         --#      CallCount = CallCount~ + 1;
>    
> The postconditions specify that both variables are incremented by one. In SPARK, X~ refers
> to the initial value of X – similarly for CallCount.


The X~ syntax has a purpose similar to the "old" of the Eiffel language and the OldValue() of C# 4.
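D2 has no "old" in its out contracts yet, so the closest runtime approximation is saving the initial values by hand; a minimal sketch (inc and callCount are hypothetical names):

int callCount; // the global that the SPARK contract talks about

void inc(ref int x) {
    immutable oldX = x;             // manual stand-in for SPARK's X~
    immutable oldCount = callCount; // manual stand-in for CallCount~
    scope(exit) {
        // The two postconditions, checked at run time instead of statically:
        assert(x == oldX + 1);
        assert(callCount == oldCount + 1);
    }
    x++;
    callCount++;
}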


This is another interesting quotation:

> We can strengthen the post-condition (lines 775 and 776 below) to specify that only
> the entry indexed by CurrentLogFile is incremented and all other elements remain unchanged.
> 
>   772   --# pre  LogFileEntries(CurrentLogFile) < MaxLogFileEntries;
>   773   --# post LogFileEntries = LogFileEntries~[CurrentLogFile =>
>                      LogFileEntries~(CurrentLogFile) + 1];

I think the same post-condition may be written in D like this (if and once D supports "old"), but it's not as nice as the SPARK version, and it's not verified statically:

out {
    assert(LogFileEntries == old.LogFileEntries[0 .. CurrentLogFile] ~
                             [old.LogFileEntries[CurrentLogFile] + 1] ~
                             old.LogFileEntries[CurrentLogFile + 1 .. $]);
}
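The concatenation above is hard to read; a tiny hypothetical helper states the "only this entry grew by one" property more directly (onlyIncrementedAt is a made-up name):

// True if `now` equals `before` except that before[i] was incremented by one.
bool onlyIncrementedAt(const int[] now, const int[] before, size_t i) {
    if (now.length != before.length)
        return false;
    foreach (j, v; now)
        if (v != before[j] + (j == i ? 1 : 0))
            return false;
    return true;
}

unittest {
    assert( onlyIncrementedAt([1, 3, 5], [1, 2, 5], 1));
    assert(!onlyIncrementedAt([1, 3, 6], [1, 2, 5], 1));
}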

Bye,
bearophile

