Concept proposal: Safely catching error

Olivier FAURE via Digitalmars-d digitalmars-d at puremagic.com
Thu Jun 8 02:27:55 PDT 2017


On Wednesday, 7 June 2017 at 19:45:05 UTC, ag0aep6g wrote:
> You gave the argument against catching out-of-bounds errors as: 
> "it means an invariant is broken, which means the code 
> surrounding it probably makes invalid assumptions and shouldn't 
> be trusted."
>
> That line of reasoning applies to @trusted code. Only @trusted 
> code can lose its trustworthiness. @safe code is guaranteed 
> trustworthy (except for calls to @trusted code).

To clarify, when I said "shouldn't be trusted", I meant in the 
general sense, not in the memory safety sense.

I think Jonathan M Davis put it nicely:

On Wednesday, 31 May 2017 at 23:51:30 UTC, Jonathan M Davis wrote:
> Honestly, once a memory corruption has occurred, all bets are 
> off anyway. The core thing here is that the contract of 
> indexing arrays was violated, which is a bug. If we're going to 
> argue about whether it makes sense to change that contract, 
> then we have to discuss the consequences of doing so, and I 
> really don't see why whether a memory corruption has occurred 
> previously is relevant. [...] In either case, the runtime has 
> no way of determining the reason for the failure, and I don't 
> see why passing a bad value to index an array is any more 
> indicative of a memory corruption than passing an invalid day 
> of the month to std.datetime's Date when constructing it is 
> indicative of a memory corruption.
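
To make that comparison concrete, here's a rough sketch of the two 
cases (untested; the bounds check assumes a default, non-release 
build). Both failures come from a bad argument, but one surfaces as 
an Error and the other as a plain Exception:

import core.exception : RangeError;
import std.datetime : Date, DateTimeException;
import std.stdio : writeln;

void main()
{
    auto arr = new int[](10);
    size_t i = 10;  // the same kind of caller bug as a bad day-of-month

    try
    {
        writeln(arr[i]);            // bounds check fails
    }
    catch (RangeError e)
    {
        writeln("bad index -> Error: ", e.msg);
    }

    try
    {
        writeln(Date(2017, 6, 31)); // June has no 31st
    }
    catch (DateTimeException e)
    {
        writeln("bad date -> Exception: ", e.msg);
    }
}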

The sane way to protect against memory corruption is to write 
safe code, not code that *might* shut down brutally once memory 
corruption has already occurred. You do that by using @safe and 
proofreading all the @trusted functions in your libs.

Contracts are made to preempt memory corruption, and to protect 
against *programming* errors; they're not recoverable because 
breaking a contract means that from now on the program is in a 
state that wasn't anticipated by the programmer.
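
For illustration, a minimal sketch of what I mean by a contract here 
(written with a plain assert, which is the same mechanism in-contracts 
use; assume a default build where asserts are compiled in):

import std.stdio : writeln;

int daysInMonth(int month)
{
    // The function's contract: month must be in 1 .. 12.
    assert(month >= 1 && month <= 12, "month out of range");
    immutable int[12] days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
    return days[month - 1];
}

void main()
{
    writeln(daysInMonth(2));       // fine: 28
    try
    {
        writeln(daysInMonth(13));  // contract broken
    }
    catch (Exception e)
    {
        writeln("never reached");  // AssertError is an Error, not an Exception
    }
    // the AssertError propagates out of main and terminates the program
}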

Which means the only way to handle them gracefully is to cancel 
what you were doing and go back to the pre-contract-breaking 
state, produce a big, detailed error message, and then exit / 
remove the thread / etc.
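
In code, I'm picturing something like this (a rough sketch; 
processRequest and the choice of boundary are made up). The caveat, 
which is really what this thread is about, is that D doesn't 
guarantee that cleanup such as destructors and scope(exit) runs 
while an Error is in flight:

import std.stdio : stderr;

void processRequest()
{
    // Hypothetical programming error: an invariant turns out to be broken.
    int balance = -1;
    assert(balance >= 0, "balance went negative");
}

int main()
{
    try
    {
        processRequest();
    }
    catch (Error e)
    {
        // Cancel the work, report in as much detail as possible, and get out.
        // No attempt to keep running with broken invariants.
        stderr.writeln("fatal bug, shutting down:\n", e);
        return 1;
    }
    return 0;
}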

>> I think the issue of @trusted is tangential to this. If you 
>> (or the writer of a library you use) are using @trusted to 
>> cast away pureness and then have side effects, you're already 
>> risking data corruption and undefined behavior, catching 
>> Errors or no catching Errors.
>
> The point is that an out-of-bounds error implies a bug 
> somewhere. If the bug is in @safe code, it doesn't affect 
> safety at all. There is no explosion. But if the bug is in 
> @trusted code, you can't determine how large the explosion is 
> by looking at the function signature.

I don't think there is much overlap between the problems that can 
be caused by faulty @trusted code and the problems that can be 
caught as Errors.

Note that this is not a philosophical problem. I'm making an 
empirical claim: "Catching Errors would not open programs to 
memory safety attacks or accidental memory safety blunders that 
would not otherwise happen".

For instance, if some poorly-written @trusted function causes the 
size of an int[10] slice to be registered as 20, then your 
program becomes vulnerable to buffer overflows when you iterate 
over it; the buffer overflow will not throw any Error.
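
Something like this (names made up) is what I have in mind; the 
bounds check trusts the bogus length, so the @safe loop silently 
reads past the real buffer and no Error is ever thrown:

int[] makeSlice() @trusted
{
    auto buf = new int[](10);   // conceptually the int[10] from above
    // Bug: the slice is rebuilt from the raw pointer with the wrong
    // length, so downstream code believes it owns 20 ints.
    return buf.ptr[0 .. 20];
}

int total() @safe
{
    int sum;
    foreach (x; makeSlice())    // walks 20 ints over a 10-int buffer
        sum += x;
    return sum;
}

void main() @safe
{
    import std.stdio : writeln;
    writeln(total());           // garbage (or a crash), but no RangeError
}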

I'm not sure what the official stance is on this. As far as I'm 
aware, contracts and OOB checks are supposed to prevent memory 
corruption, not detect it. Any security based on detecting 
potential memory corruption can ultimately be bypassed by a 
hacker.

