Program logic bugs vs input/environmental errors

Timon Gehr via Digitalmars-d digitalmars-d at puremagic.com
Tue Oct 7 18:18:00 PDT 2014


On 10/08/2014 02:27 AM, Walter Bright wrote:
> On 10/7/2014 2:12 PM, Timon Gehr wrote:
>> On 10/07/2014 10:09 PM, Walter Bright wrote:
>>> What defined behavior would you suggest would be possible after an
>>> overflow bug is detected?
>>
>> At the language level, there are many possibilities. Just look at what
>> type safe
>> languages do. It is not true that this must lead to UB by a "definition"
>> commonly agreed upon by participants in this thread.
>
> And even in a safe language, how would you know that a bug in the
> runtime didn't lead to corruption which put your program into the
> unknown state?
>
> Your assertion

Which assertion? That there are languages that call themselves type safe?

> rests on some assumptions:
>
> 1. the "safe" language doesn't have bugs in its proof or specification

So what? I can report these if present. That's not undefined behaviour; 
it is a wrong specification or a bug in the automated proof checker.
(In my experience, however, the developers might not actually acknowledge 
that the specification violates type safety. UB in @safe code is a joke. 
But I am digressing.)

Not specific to our situation where we get an overflow.

> 2. the "safe" language doesn't have bugs in its implementation

So what? I can report these if present. That's not undefined behaviour; 
it is wrong behaviour.

Not specific to our situation where we get an overflow.

> 3. that it is knowable what caused a bug without ever having debugged it

Why would I need to assume this to make my point?

Not specific to our situation where we get an overflow.

> 4. that program state couldn't have been corrupted due to hardware failures

Not specific to our situation where we detect the problem.

> 5. that it's possible to write a perfect system
>

You cannot disprove this one, and no, I am not assuming it. But it 
would be extraordinarily silly to write into the official language 
specification: "a program may do anything at any time, because a 
conforming implementation might contain bugs".

Also: Not specific to our situation where we detect the problem.

> all of which are false.
>
>
> I.e.

Why "I.e."?

> it is not possible to define the state of a program after it has
> entered an unknown state that was defined to never happen.

By assuming your 5 postulates are false, and filling in the 
justification for the "i.e." you left out, one quickly reaches the 
conclusion that it is not possible to define the behaviour of a program 
at all. Therefore, whenever we describe programs, our words are 
meaningless, because such a description is not "possible". This quickly 
becomes a great example of the kind of black/white thinking you warned 
against in another post in this thread. It must be permissible to use 
idealised language; otherwise one cannot say or think anything.

What is _undefined behaviour_ depends on the specification alone, and as 
flawed and ambiguous as that specification may be, in practice it will 
still be an invaluable tool for communication among language 
users/developers.
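
To make this concrete: below is a minimal sketch of one such defined 
behaviour after an overflow is detected. It is plain D, the name 
checkedAdd is made up for illustration, and it is not a proposal for 
what D itself must do; the only point is that a specification can pick 
a defined outcome.

import std.stdio : writeln;

// One possibility among many: detect the overflow and give the failure
// a defined meaning (here: a thrown exception), instead of declaring
// the entire program state undefined.
int checkedAdd(int a, int b)
{
    immutable long wide = cast(long) a + b;   // compute in a wider type
    if (wide < int.min || wide > int.max)
        throw new Exception("integer overflow in checkedAdd");
    return cast(int) wide;
}

void main()
{
    writeln(checkedAdd(1, 2));            // 3
    try
        writeln(checkedAdd(int.max, 1));  // defined behaviour: throws
    catch (Exception e)
        writeln("caught: ", e.msg);
}

Whether the language throws, returns an error value, or aborts 
deterministically is a design choice; the point is merely that the 
specification can fix one of these outcomes.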

Can we at least agree on the following, and then retire this 
discussion? Dicebot's request is to have the behaviour of inadvisable 
constructs defined, so that an implementation cannot randomly change 
behaviour and the developers cannot then close down the corresponding 
bugzilla issue because it was the user's fault anyway. That request is 
not made unreasonable by definition merely because the system will 
never reach a perfect state anyway.

