What is the point of nothrow?
wjoe
none at example.com
Wed Jun 13 02:02:54 UTC 2018
On Tuesday, 12 June 2018 at 18:41:07 UTC, Jonathan M Davis wrote:
> On Tuesday, June 12, 2018 17:38:07 wjoe via Digitalmars-d-learn
> wrote:
>> On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis
>> wrote:
>> > On Sunday, June 10, 2018 23:59:17 Bauss via
>> > Digitalmars-d-learn
>> > wrote:
>> > Errors are supposed to kill the program, not get caught. As
>> > such, why does it matter if it can throw an Error?
>> >
>> > Now, personally, I'm increasingly of the opinion that the
>> > fact that we have Errors is kind of dumb given that if it's
>> > going to kill the program, and it's not safe to do clean-up
>> > at that point, because the program is in an invalid state,
>> > then why not just print the message and stack trace right
>> > there and then kill the program instead of throwing
> > anything? But unfortunately, that's not what happens, which
>> > does put things in the weird state where code can catch an
>> > Error even though it shouldn't be doing that.
>>
>> Sorry for off topic but this means that I should revoke a
>> private key every time a server crashes because it's not
>> possible to erase secrets from RAM ?
>
> The fact that an Error was thrown means that either the program
> ran out of a resource that it requires to do its job and
> assumes is available such that it can't continue without it
> (e.g. failed memory allocation) and/or that the program logic
> is faulty. At that point, the program is in an invalid state,
> and by definition can't be trusted to do the right thing. Once
If memory serves, a malloc call in C can easily be checked for
success by comparing the returned pointer to null prior to
accessing it. If the pointer is null, this only means that the
memory allocation for the requested size failed. I fail to see
how this failed attempt at malloc could have rendered the entire
program state invalid.
Why would it be inherently unsafe to free some memory and try to
malloc again?
Maybe the allocation was for an optional feature that could
simply be disabled. Or maybe it really does mean that the program
cannot continue.
I still don't see the need to force-quit without the opportunity
to decide whether it's an error to abort on or an error that can
be fixed at run time.
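Something like this is what I have in mind, just a sketch in D
syntax using the C allocator from core.stdc.stdlib; releaseCache
is a made-up helper:

import core.stdc.stdlib : malloc, free;

void releaseCache() { /* hypothetical: free non-essential buffers */ }

void* allocateOrRecover(size_t size)
{
    void* p = malloc(size);
    if (p is null)             // failure is reported, not hidden
    {
        releaseCache();        // drop something we can live without
        p = malloc(size);      // and retry once
    }
    return p;                  // null here still means "could not allocate"
}

Nothing in that failed first call has scribbled over unrelated
memory; the caller simply gets told the allocation didn't happen.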
> the program is in an invalid state, running destructors, scope
> statements, etc. could actually make things much worse. They
> could easily be operating on invalid data and do entirely the
> wrong thing. Yes, there are cases where someone could look at
Could. Like erasing the hard drive? But that could have happened
already; it could even be the reason for the error in the first
place.
Destructors, scope statements etc. could also still work
flawlessly, and things could get worse precisely because the
program did not exit gracefully: data not synced to disk, a
rollback not executed, vital shutdown commands omitted.
> what's happening and determine that based on what exactly went
> wrong, some amount of clean-up is safe, but without knowing
> exactly what went wrong and why, that's not possible.
>
But Errors have names, or codes, so it should be possible to
figure out what went wrong and why. No?
In the case of an out-of-memory error, maybe the condition could
be resolved by running the GC and retrying, as sketched below.
I'm afraid I really can't grasp why it's the end of the world
just because an Error was thrown.
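To illustrate the retry idea, here is a sketch only; catching
OutOfMemoryError is precisely what the advice above warns
against, and as far as I know the GC already runs a collection
before throwing it, so the retry may not buy much:

import core.exception : OutOfMemoryError;
import core.memory : GC;

int[] allocateWithRetry(size_t n)
{
    try
    {
        return new int[n];
    }
    catch (OutOfMemoryError)
    {
        GC.collect();          // free unreferenced memory
        GC.minimize();         // give unused pages back to the OS
        return new int[n];     // retry once; throws again if it still fails
    }
}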
> And remember that regardless of what happens with Errors, other
> things can kill your program (e.g. segfaults), so if you want a
> robust server application, you have to deal with crashes
> regardless. You can't rely on your program always exiting
> cleanly or doing any proper clean-up, much as you want it to
> exit cleanly normally. Either way, if your program is crashing
It is possible to install a signal handler for almost every
signal on POSIX, including segfault. The only signal you can't
catch is signal 9, SIGKILL, if memory serves.
So I could, for instance, install a clean-up handler on segfault
that wipes secrets via memset, or a plain for loop, and then
terminates.
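Roughly like this; a sketch assuming a POSIX system, where the
secretKey buffer and the exit code are made up, and whether the
process is in any shape to run this after a segfault is of course
the open question:

import core.sys.posix.signal;
import core.stdc.stdlib : _Exit;
import core.stdc.string : memset;

__gshared ubyte[32] secretKey;     // hypothetical sensitive data

extern (C) void wipeAndDie(int sig) nothrow @nogc
{
    memset(secretKey.ptr, 0, secretKey.length); // best-effort erase
    _Exit(128 + sig);                           // terminate immediately
}

void installHandler()
{
    sigaction_t sa;
    sa.sa_handler = &wipeAndDie;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, null);
}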
If program state, and not necessarily just in my own programs but
in any program that stores secrets in RAM, is to be considered
invalid once an Error is thrown, and I cannot rely on proper
clean-up, then I must consider a secret leaked the moment it is
in RAM while an Error is thrown.
The only conclusion is that such a language is not safe to use
for applications that handle sensitive information, such as
encrypted email, digital signing, secure IM, or anything that
requires secrets to do its work.
This is really sad, because I want to think that improved
technology is actually better than its precursors.
What I would hope for is a mechanism that helps the developer
safely handle these error conditions, or at least terminate
gracefully. I understand that nothing can be done when the
program state really is damaged beyond repair and the program is
terminated by the OS, but simply assuming everything is FUBAR
because an Error was thrown seems cheap.
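What I picture is some last-chance hook, even if it's as blunt
as a top-level catch in main. Sketch only; catching Throwable
like this is exactly what is being argued against, and a real
crash may never reach it:

import std.stdio : stderr;

__gshared ubyte[] secret;          // hypothetical sensitive buffer

void runApplication() { /* the actual work would go here */ }

int main()
{
    try
    {
        runApplication();
        return 0;
    }
    catch (Throwable t)            // Exceptions and Errors alike
    {
        secret[] = 0;              // best-effort erase before going down
        stderr.writeln("fatal: ", t.msg);
        return 1;
    }
}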
> frequently enough that the lack of clean-up poses a real
> problem, then you have serious problems anyway. Certainly, if
> you're getting enough crashes that having to do something
> annoying like revoke a private key is happening anything but
> rarely, then you have far worse problems than having to revoke
> a private key or whatever else you might have to do because the
> program didn't shut down cleanly.
>
> - Jonathan M Davis
I can't know whether the error was caused by accident or on
purpose, and I don't see how the frequency of failure changes
anything about that. If a secret is left in RAM it can be read or
end up in a core dump. Whether it leaked the first time, or not
at all, I wouldn't know; a defensive approach is to assume the
worst case from the first occurrence.
Also, this doesn't just relate to secrets not being cleaned up.
I could imagine something like sending out a UDP packet, or a
signal on a pin, or something similar to have external hardware
stop its operation. An emergency stop comes to mind.
Further, does this mean that a unittest should run each test
case in its own process? Because an assert(false) in a
not-yet-implemented test case would render all further test cases
(theoretically) undefined, which would make unittest{} blocks
rather useless, too?
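To my understanding the default runner compiles all unittest
blocks of a module into one sequence, so something like this
never reaches the second block:

unittest
{
    assert(false, "not yet implemented");   // throws AssertError
}

unittest
{
    // With the default runner this block is skipped once the
    // previous one has thrown; running each test in its own
    // process (or using a custom runner) would isolate the failure.
    assert(1 + 1 == 2);
}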
Sorry for off topic...