dereferencing null

H. S. Teoh hsteoh at quickfur.ath.cx
Tue Mar 6 18:37:16 PST 2012


On Tue, Mar 06, 2012 at 08:29:35PM -0500, Chad J wrote:
[...]
> But what do you say to the notion of isolation?  someFunc is
> isolated from riskyShenanigans because it /knows/ what state is
> touched by riskyShenanigans.  If riskyShenanigans does something
> strange and unexpected, and yes, it does have a bug in it, then I
> feel that someFunc should be able to reset the state touched by
> riskyShenanigans and continue.
>
> The thing I find really strange here is that there's this belief
> that if feature A is buggy then the unrelated feature B shouldn't
> work either. Why?  Shouldn't the user be able to continue using
> feature B?

If feature A is buggy and the user is trying to use it, then there's a
problem. If the user doesn't use feature A or knows that feature A is
buggy and so works around it, then feature A doesn't (shouldn't) run and
won't crash.


> Btw, crashing a program is bad.  That can lose data that the user
> has entered but not yet stored.  I should have a very good reason
> before I let this happen.

I don't know what your software design is, but when I write code, if
there is any possibility of data loss, I always make the program back up
the data at intervals. I don't trust the integrity of user data after a
major problem like a null pointer dereference. Obviously there's a
serious logic flaw in the program that led to it, so all bets are off as
to whether the user's data is even usable.
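
(To illustrate what I mean by backing up at intervals, here's a minimal
sketch in D; the AutoBackup name, the backup path, and the 5-minute
interval are all just made up for the example. The editing loop would
simply call maybeSave(currentText) after every change.)

import std.datetime.systime : Clock, SysTime;
import core.time : minutes;
import std.file : rename, write;

// Periodically snapshot the user's data so a later crash loses at
// most one interval's worth of edits.
struct AutoBackup
{
    string path;        // where the backup file lives (made up)
    SysTime lastSave;

    void maybeSave(const(char)[] data)
    {
        auto now = Clock.currTime();
        if (now - lastSave < 5.minutes)
            return;

        // Write a temp file first, then rename it into place, so an
        // interrupted save never clobbers the previous good backup.
        write(path ~ ".tmp", data);
        rename(path ~ ".tmp", path);
        lastSave = now;
    }
}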


> It would also be extremely frustrating for a user to have a program
> become crippled because some feature they don't even use will
> occasionally dereference null and crash the thing.  Then they have
> to wait for me to fix it, and I'm busy, so it could be awhile.

The fact that the unused feature is running even though the user isn't
using it is, to me, a sign that something like a null pointer dereference
should be fatal: it means your assumption that the feature was behaving
consistently in the background has turned out to be false, so who knows
what else it got wrong before it hit the null pointer. I would hate for
the program to continue running after that, since its consistency has
been compromised; continuing will probably only make the problem worse.


> My impression so far is that this hinges on some kind of "where
> there's one, there's more" argument.  I am unconvinced because
> programs tend to have bugs anyways.  riskyShenanigans doing a
> null-dereference once doesn't mean it's any more likely to produce
> corrupt results the rest of the time: it can produce corrupt results
> anyways, because it is a computer program written by a fallible
> human being.  Anyone trying to be really careful should validate the
> results in someFunc.

It sounds like what you want is some kind of sandboxed isolation
mechanism, where null pointers are just the most obvious problem among
the other things that could go wrong.

We could have a std.sandbox module that runs some given code (say
PossiblyBuggyFeatureA) inside a sandbox, so that if it dereferences a
null pointer, corrupts memory, or whatever, it won't affect
UnrelatedFeatureB, which runs in a different sandbox, or the rest of the
system. This way you can boldly charge forward in spite of any problems,
because you know that only the code inside the sandbox is in a bad
state; the rest of the program is (presumably) still in good working
condition.

On Linux this is easily implemented with fork(), perhaps chroot() (if
you're *really* paranoid), and message passing (so the main program is
guaranteed to remain uncorrupted even when BadPluginX goes crazy and
starts trashing memory everywhere). I don't know the details on Windows,
but I assume there is some way to do sandboxing there as well.
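
A very rough sketch of the fork() half in D (Posix-only, so no help on
Windows), reusing riskyShenanigans from your example; runSandboxed is a
made-up name, and a real version would hand results back to the parent
over a pipe instead of returning just a pass/fail flag:

import core.sys.posix.sys.wait : waitpid, WEXITSTATUS, WIFEXITED;
import core.sys.posix.unistd : _exit, fork;
import std.stdio : writeln;

// Run fn in a forked child so that a crash (e.g. SIGSEGV from a null
// dereference) kills only the child; returns true if fn ran to completion.
bool runSandboxed(void function() fn)
{
    auto pid = fork();
    if (pid < 0)
        return false;           // fork failed
    if (pid == 0)
    {
        fn();                   // child: run the risky code
        _exit(0);               // exit without touching the parent
    }

    int status;
    waitpid(pid, &status, 0);   // parent: wait and see how the child ended
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

void riskyShenanigans()
{
    auto table = new int*[4];   // entries default to null
    *table[0] = 42;             // oops: would normally take the whole program down
}

void main()
{
    if (!runSandboxed(&riskyShenanigans))
        writeln("feature A misbehaved, but feature B is still fine");
}

The message-passing half is then just a pipe created before the fork:
the child writes its results into it and the parent reads them, so the
parent's memory never gets touched by the possibly-crazy child.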


T

-- 
Customer support: the art of getting your clients to pay for your own incompetence.

