The extent of trust in errors and error handling

Ali Çehreli via Digitalmars-d digitalmars-d at puremagic.com
Wed Feb 1 11:25:07 PST 2017


tl;dr - Seeking thoughts on trusting a system that allows "handling" errors.

One of my extra-curricular interests is the Mill CPU[1]. A recent 
discussion in that context reminded me of the Error-Exception 
distinction in languages like D.

1) There is the well-known issue of whether Error should ever be caught. 
If Error represents conditions where the application is no longer in a 
defined state, and hence should stop operating as soon as possible, 
should that also carry over to other applications, to the OS, and 
perhaps even to other systems in the whole cluster?
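In D terms, the question is sharpened by the fact that both Error and 
Exception derive from Throwable, so the language does allow catching an 
Error even though the runtime makes no cleanup guarantees while 
unwinding one. A minimal sketch:

```d
import std.stdio;

void main()
{
    // Exception: a recoverable condition; catching it is routine.
    try
    {
        throw new Exception("bad input");
    }
    catch (Exception e)
    {
        writeln("recovered: ", e.msg);
    }

    // Error: the program is in an undefined state. Catching it
    // compiles, but scope guards and destructors are not guaranteed
    // to have run during unwinding, so the "handled" state is suspect.
    try
    {
        throw new Error("consistency check failed");
    }
    catch (Error e)
    {
        writeln("caught, but in what state? ", e.msg);
    }
}
```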

For example, if a function detects an inconsistency in a DB that is 
available to all applications (as is the case in the Unix model of 
user-based access protection), should all processes that use that DB 
stop operating as well?

2) What if an intermediate layer of code did in fact handle an Error 
(perhaps one raised by a function pre-condition check)? Should the 
callers of that layer have a say in that? Should higher-level code be 
able to say that the Error should not be handled at all?

For example, application code may want to say that no library it uses 
should handle Errors thrown by a security library.
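A sketch of that worry, with hypothetical names (checkToken stands in 
for some security-library check; tryCheck is the intermediate layer):

```d
// Hypothetical security-library function; the assert stands in for a
// pre-condition check that raises an Error (AssertError) when it fails
// in a non-release build.
void checkToken(string token)
{
    assert(token.length > 0, "empty token");
}

// A "helpful" intermediate layer that swallows everything it sees.
// Catching Throwable catches Error too, and the application sitting
// above this layer has no way to veto that decision.
bool tryCheck(string token)
{
    try
    {
        checkToken(token);
        return true;
    }
    catch (Throwable t)
    {
        return false;   // the assertion failure is now invisible
    }
}
```

Nothing in the language today lets the application declare that 
tryCheck must not catch the Error escaping checkToken.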

Aside, and more related to D: I think this whole discussion is related 
to another issue that has been raised in this forum a number of times: 
whose responsibility is it to execute function pre-conditions? I think 
it was agreed that pre-condition checks should run in the context of 
the caller. So it should be the application code, not the library, that 
decides whether they are executed. In other words, it should be 
irrelevant whether the library was built in release mode or not; its 
pre-condition checks should be available to the caller. (I think we 
need to fix this anyway.)
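For concreteness, a pre-condition in D is an `in` contract on the 
callee, and as things stand it is compiled (or compiled out) along with 
the callee:

```d
import std.math : sqrt;

// The pre-condition lives in the callee. If this function ships in a
// library built with -release, the check disappears for every caller,
// regardless of how the caller itself was built -- which is exactly
// the problem described above.
double mySqrt(double x)
in
{
    assert(x >= 0.0, "mySqrt requires a non-negative argument");
}
do
{
    return sqrt(x);
}
```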

And there is the issue of the programmer making the right decision: One 
person's Exception may be another person's Error.

It's fascinating that there are so many fundamental questions involving 
CPUs, runtimes, loaders, and OSes, and that some of these issues are 
not even semantically describable. For example, I think there is no way 
of requiring that e.g. a square root function have no side effects at 
all: the compiler can approve a piece of code, but the library that is 
actually linked with the application can do anything else it wants.
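A sketch of that gap, using D's pure as the stand-in for "no side 
effects" (two files; the name fastSqrt is made up): the compiler checks 
only the declaration it sees, and with C linkage the purity promise is 
not even part of the mangled symbol, so the linker cannot catch the lie.

```d
// app.d -- what the D compiler verifies against:
extern (C) pure double fastSqrt(double x);   // declared side-effect free

// impl.c -- what actually gets linked (C source, shown as a comment):
//     #include <math.h>
//     #include <stdio.h>
//     double fastSqrt(double x)
//     {
//         fprintf(stderr, "side effect!\n");   // impure, yet reached
//         return sqrt(x);                      // through a pure call
//     }
```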

Thoughts? Are we doomed? Surprisingly, it seems not, as we use 
computers everywhere and they seem to work. :o)

Ali

[1] http://millcomputing.com/
