The extent of trust in errors and error handling

Steve Biedermann via Digitalmars-d digitalmars-d at
Tue Feb 7 01:03:55 PST 2017

On Wednesday, 1 February 2017 at 19:25:07 UTC, Ali Çehreli wrote:
> tl;dr - Seeking thoughts on trusting a system that allows 
> "handling" errors.
> One of my extra-curricular interests is the Mill CPU[1]. A 
> recent discussion in that context reminded me of the 
> Error-Exception distinction in languages like D.
> 1) There is the well-known issue of whether Error should ever 
> be caught. If Error represents conditions where the application 
> is not in a defined state, hence it should stop operating as 
> soon as possible, should that also carry over to other 
> applications, to the OS, and perhaps even to other systems in 
> the whole cluster?
> For example, if a function detected an inconsistency in a DB 
> that is available to all applications (as is the case in the 
> Unix model of user-based access protection), should all 
> processes that use that DB stop operating as well?
> 2) What if an intermediate layer of code did in fact handle an 
> Error (perhaps raised by a function pre-condition check)? 
> Should the callers of that layer have a say on that? Should a 
> higher level code be able to say that Error should not be 
> handled at all?
> For example, an application code may want to say that no 
> library that it uses should handle Errors that are thrown by a 
> security library.
> Aside, and more related to D: I think this whole discussion is 
> related to another issue that has been raised in this forum a 
> number of times: Whose responsibility is it to execute function 
> pre-conditions? I think it was agreed that pre-condition checks 
> should be run in the context of the caller. So, not the 
> library, but the application code, should require that they be 
> executed. In other words, it should be irrelevant whether the 
> library was built in release mode or not, its pre-condition 
> checks should be available to the caller. (I think we need to 
> fix this anyway.)
> And there is the issue of the programmer making the right 
> decision: One person's Exception may be another person's Error.
> It's fascinating that there are so many fundamental questions 
> with CPUs, runtimes, loaders, and OSes, and that some of these 
> issues are not even semantically describable. For example, I 
> think there is no way of requiring that e.g. a square root 
> function not have side effects at all: The compiler can allow a 
> piece of code but then the library that was actually linked 
> with the application can do anything else that it wants.
> Thoughts? Are we doomed? Surprisingly, it seems not, as we 
> use computers everywhere and they seem to work. :o)
> Ali
> [1]

Whether you can recover from an error depends on the 
capabilities of the language and the guarantees it makes about 
errors.

If the language has no raw pointers and guarantees that memory 
cannot be unintentionally overwritten in any other way, then you 
can recover from an error, because you have the guarantee that 
no memory corruption has happened.
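To make that concrete, here is a small sketch in Python (not D, 
just an illustration of the idea): in a memory-safe language an 
out-of-bounds write raises a well-defined error instead of 
silently overwriting memory, which is exactly what makes 
recovery possible.

```python
# Sketch: a memory-safe language turns an out-of-bounds access into
# a well-defined error, so no corruption can have happened.
buf = [0, 1, 2]
recovered = False

try:
    buf[10] = 99          # would be undefined behavior in C
except IndexError:
    recovered = True      # nothing was overwritten; safe to continue

assert recovered
assert buf == [0, 1, 2]   # the list is untouched after the error
```

The same access in C could corrupt unrelated data and leave the 
program in an unknown state, which is why recovery is only sound 
when the language rules it out.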

If it is exactly specified what happens when an error occurs, 
you can decide whether it's safe to continue. But for that you 
need to know exactly what the runtime does when this error is 
raised. If you aren't 100% sure what your state is, you 
shouldn't continue. (This matters more in life-critical software 
than in command-line tools, but still...)
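Here is a sketch of that "exactly specified" property (Python, 
with hypothetical names): an operation that validates its 
arguments and raises *before* mutating any state. Because the 
failure point is specified, a caller knows the post-error state 
precisely and can decide to continue.

```python
# Sketch: an operation whose failure behavior is exactly specified.
# If withdraw() raises, it is guaranteed to have made no partial update.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Validate first: if this raises, self.balance is untouched,
        # so the state after the error is fully known.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account(100)
try:
    acct.withdraw(150)
except ValueError:
    pass  # safe to continue: the failed call changed nothing

assert acct.balance == 100
```

If the check instead happened halfway through a multi-step 
update, the caller could not know the state after the error, and 
continuing would be unsound.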

Or you have a software stack like Erlang, where you can simply 
restart the failing process. In Erlang it doesn't matter whether 
it's an exception or an error: if a process fails, restart it 
and move on. This works because processes are isolated, so an 
error can't corrupt other processes.
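The Erlang "let it crash" pattern can be sketched in Python using 
OS processes for isolation (hypothetical names; a real Erlang 
supervisor does much more):

```python
# Sketch: Erlang-style supervision. Each worker runs in its own OS
# process, so a crash cannot corrupt the supervisor's memory; the
# supervisor just starts a fresh one.
import subprocess
import sys

def run_worker(attempt):
    # Simulate a worker that crashes on its first run and then succeeds.
    code = f"import sys; sys.exit(1 if {attempt} == 0 else 0)"
    return subprocess.run([sys.executable, "-c", code]).returncode

def supervise(max_restarts=3):
    for attempt in range(max_restarts):
        if run_worker(attempt) == 0:
            return attempt  # number of restarts that were needed
    raise RuntimeError("gave up after repeated crashes")
```

Because the failed worker's memory is discarded wholesale, the 
supervisor never has to reason about what state the error left 
behind, which is what makes restarting safe.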

So there are many approaches to this problem, and all of them 
are a bit different. The final answer can only be: it depends on 
the language and the guarantees it makes. (And on how much you 
trust the compiler to do the right thing :D)

More information about the Digitalmars-d mailing list