Why don't other programming languages have ranges?

Don nospam at nospam.com
Tue Aug 3 07:29:47 PDT 2010


retard wrote:
> Tue, 03 Aug 2010 10:43:44 +0200, Don wrote:
> 
>> Jim Balter wrote:
>>> "Jonathan M Davis" <jmdavisprog at gmail.com> wrote in message
>>> news:mailman.50.1280425209.13841.digitalmars-d at puremagic.com...
>>>> On Thursday, July 29, 2010 03:11:21 Don wrote:
>>>>> I have to disagree with that. "Correctness proofs" are based on a
>>>>> total fallacy. Attempting to prove that a program is correct (on
>>>>> a real machine, as opposed to a theoretical one) is utterly
>>>>> ridiculous. I'm genuinely astonished that such an absurd idea
>>>>> ever had any traction.
>>>> It's likely because the idea of being able to _prove_ that your
>>>> program is correct is rather seductive. As it is, you never know
>>>> for sure whether you found all of the bugs or not. So I think it's
>>>> quite understandable that the academic types, at the least, have
>>>> been interested in it. I'd also have expected folks like Boeing to
>>>> have some interest in it. However, since it's so time-consuming,
>>>> error-prone, and ultimately not enough even if you do it right, it
>>>> just doesn't make sense to do it in most cases, and it's never a
>>>> good idea to rely on it.
>>>>
>>>> Given how computer science is effectively applied mathematics, I
>>>> don't think it's at all surprising or ridiculous that correctness
>>>> proofs have been tried (if anything, it would be totally natural
>>>> and obvious to a lot of math folks), but I do think it's fairly
>>>> clear that they aren't really a solution, for a whole host of
>>>> reasons.
>>>>
>>>> - Jonathan M Davis
>>> You've snipped the context; if you go back and look at what Don was
>>> disagreeing with, it was simply that correctness proofs are *one tool
>>> among many* for increasing program reliability -- no one claimed that
>>> they are "a solution". The claim that they are "based on a total
>>> fallacy" is grossly ignorant.
>> No, I was arguing against the idea that they are a total solution. They
>> were presented as such in the mid-80's. The idea was that you developed
>> the program and its proof simultaneously, "with the proof usually
>> leading the way". If you have proved the program is correct, of course
>> you don't need anything else to improve reliability. I see this as a
>> very dangerous idea, which I think was Walter's original point: you
>> cannot completely trust ANYTHING.
> 
> The issues Walter mentioned are almost orthogonal to traditional proofs. 
> You can prove some upper bounds for resource usage or guarantee that the 
> software doesn't crash or hang _on an ideal model of the computer_. 

Exactly.

> If
> you have two memory modules and shoot one of them to pieces with a double-
> barreled shotgun or M16, proving the correctness of a quicksort doesn't 
> guarantee that your sorted data won't become corrupted. What you need is 
> fault tolerant hardware and redundant memory modules.
> 
> The reason why DbC, invariants, unit tests, and automated theorem provers 
> exist is that code written by humans typically contains programming 
> errors. No matter how perfect the host hardware is, it won't save the day 
> if you cannot write correct code. Restarting the process or having 
> redundant code paths & processors won't help if every version of the code 
> contains bugs (for instance, failing to terminate in some rare cases).
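(As an aside, D builds DbC right into the language. A minimal sketch of
my own -- the function and its contracts are purely illustrative:)

import std.stdio : writeln;

// Midpoint with explicit contracts. The precondition is the caller's
// obligation, the postcondition the callee's; both are checked at
// runtime in non-release builds.
int mid(int lo, int hi)
in
{
    assert(lo <= hi);
}
out (result)
{
    assert(lo <= result && result <= hi);
}
body
{
    return lo + (hi - lo) / 2;  // written this way to dodge (lo + hi) overflow
}

void main()
{
    writeln(mid(10, 20));  // 15; mid(20, 10) would trip the precondition
}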

Hence Walter's assertion that the most effective way to deal with this
is to have _independent_ redundant systems. Having two identical copies 
doesn't improve reliability much. But if you have two buggy pieces of
code that are completely independent, their bugs will tend to manifest
on different inputs. So you can achieve extremely high reliability even
with code that has a very high bug density.
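To make the idea concrete, here's a minimal sketch in D (entirely my
own illustration -- the two sort routines just stand in for
independently developed implementations):

import std.algorithm : sort;
import std.stdio : writeln;

// Version A: the library sort.
int[] sortA(int[] xs)
{
    auto ys = xs.dup;
    sort(ys);
    return ys;
}

// Version B: an independently written insertion sort.
int[] sortB(int[] xs)
{
    auto ys = xs.dup;
    foreach (i; 1 .. ys.length)
    {
        auto key = ys[i];
        auto j = i;
        while (j > 0 && ys[j - 1] > key)
        {
            ys[j] = ys[j - 1];
            --j;
        }
        ys[j] = key;
    }
    return ys;
}

void main()
{
    auto input = [3, 1, 4, 1, 5, 9, 2, 6];
    auto a = sortA(input);
    auto b = sortB(input);
    if (a != b)
        writeln("disagreement: at least one version is buggy");
    else
        writeln("both versions agree: ", a);
}

A mismatch only tells you that at least one version is wrong, not which
one -- which is what a third, voting unit buys you.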

I find it particularly interesting that the Pentium FDIV flaw was
detected by comparing the Pentium's results against a 486's -- even
though the comparison was made in order to detect software bugs. So
comparing independent implementations is a method of improving total
system reliability.

I've not heard this idea applied to software before. (Though I vaguely
remember a spacecraft with three computers, where if one produced
results different from the other two, it was ignored. I'm not sure
whether the processors used different algorithms, though.)
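That voting scheme is easy to sketch in code (again my own, purely
illustrative):

import std.stdio : writeln;

// 2-of-3 majority vote: the unit that disagrees with the other two
// is simply outvoted and ignored.
T vote(T)(T a, T b, T c)
{
    if (a == b || a == c)
        return a;
    if (b == c)
        return b;
    throw new Exception("no majority: all three units disagree");
}

void main()
{
    // Pretend the second unit computed a wrong answer.
    writeln(vote(42, 41, 42));  // prints 42; the faulty unit is outvoted
}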


>> I'm of course strongly in favour of static analysis tools.
> 
> These are specialized theorem provers. Even the static type checker 
> solves a more or less simple constraint satisfaction problem. These 
> use simple logic rules such as modus ponens.

Indeed.
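
For a trivial example of the type checker acting as a small theorem
prover, take a template constraint (my own sketch): it's a proposition
the compiler has to discharge before the instantiation is allowed.

import std.traits : isIntegral;

// The constraint is a proposition about T; the compiler either proves
// it for the given argument type or rejects the call outright.
T gcd(T)(T a, T b) if (isIntegral!T)
{
    while (b != 0)
    {
        auto t = b;
        b = a % b;
        a = t;
    }
    return a;
}

static assert( __traits(compiles, gcd(12, 18)));   // int satisfies the constraint
static assert(!__traits(compiles, gcd(1.5, 2.5))); // double is rejected

void main() {}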

