Bad array indexing is considered deadly

Steven Schveighoffer via Digitalmars-d digitalmars-d at puremagic.com
Wed May 31 16:53:11 PDT 2017


On 5/31/17 7:13 PM, Moritz Maxeiner wrote:
> On Wednesday, 31 May 2017 at 22:47:38 UTC, Steven Schveighoffer wrote:
>>
>> Again, there has not been memory corruption.
>
> Again, the runtime *cannot* know that and hence you *cannot* claim that.
> It sees an index out of bounds and it *cannot* reason about whether a
> memory corruption has already occurred or not, which means it *must
> assume* the worst case (it must *assume* there was).

Yes, it cannot know at any point whether or not memory corruption has 
occurred. However, it has a lever to pull that says "your program cannot 
continue, and you have no choice." It chooses to pull this lever on any 
attempted out-of-bounds access of an array, regardless of the reason 
why that is happening. The chances that memory corruption is the cause 
are very low, and even if it is, it hardly matters: the program may 
already have messed up everything by that point. In fact, the current 
behavior of printing the Error message and doing an orderly shutdown is 
pretty risky anyway if we really believe memory corruption has occurred.
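
To make concrete what pulling that lever looks like, here's a minimal 
sketch (assuming a default, bounds-checked build; the exact Error 
subclass and message depend on the runtime version):

import std.stdio : writeln;

void main()
{
    int[] a = [1, 2, 3];
    size_t i = 5;  // a bad index from, say, user input, not from corruption

    // The runtime pulls the lever here: this throws
    // core.exception.RangeError (an Error), the message is printed,
    // and the program shuts down, regardless of why i is out of range.
    writeln(a[i]);
}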

Almost no other environmentally caused error causes this lever to be 
pulled, so it doesn't make a whole lot of sense that this one does.

>
>> There is a confusion rampant in this thread that preventing
>> *attempted* memory corruption must mean there *is* memory corruption.
>
> No, please no. Nobody has written that in the entire thread even once!

"you have to assume that the index *being* out of bounds is itself the 
*result* of *already occurred* data corruption;"

> - An index being out of bounds is an error (lowercase!).
> - The runtime sees that error when the array is accessed (what you
> describe as *attempted* memory corruption).
> - The runtime does not know *why* the index is out of bounds
> It does *not* mean that there *was* memory corruption (and again, nobody
> claimed that), but the runtime cannot assume that there was not, because
> that is *unsafe*.

It's not the runtime's job to determine whether the cause of an 
out-of-bounds access is memory corruption. Its job is to prevent 
the current attempt. Throwing an Error accomplishes this, yes, but it 
also means you must shut down the program. I have no problem at all with 
it preventing the corruption, nor do I have a problem with it throwing 
an Error, per se. The problem I have is that throwing an Error itself 
corrupts the program, and makes it unusable. Therefore, it's the wrong 
tool for that job.
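
And the corruption caused by the Error itself isn't hypothetical. 
Because an Error is allowed to unwind through nothrow code, cleanup such 
as scope(exit) blocks and destructors may be skipped on the way out, so 
even a caller that catches it can't trust the program's state afterwards. 
A rough sketch (readAt is made up for illustration; whether the cleanup 
actually runs is up to the compiler, since nothrow merely permits 
skipping it):

import std.stdio : writeln;
import core.exception : RangeError;

int[] data = [1, 2, 3];
bool cleanedUp = false;

// nothrow: only Errors can escape, so the compiler is allowed to omit
// the exception-cleanup code in this function. If the bounds check
// fails, the scope(exit) below is not guaranteed to run while the
// RangeError unwinds.
int readAt(size_t i) nothrow
{
    scope(exit) cleanedUp = true;  // may be skipped when an Error is thrown
    return data[i];
}

void main()
{
    try
    {
        writeln(readAt(10));
    }
    catch (RangeError e)
    {
        // Even though the Error was caught, cleanedUp may still be false:
        // the program's invariants can no longer be trusted.
        writeln("caught; cleanedUp = ", cleanedUp);
    }
}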

And I absolutely do not think that throwing an Error in this case was 
the result of a careful decision that memory corruption must be, or even 
might be, the cause. I think it's this way because of the desire 
to write nothrow code without having to pepper your code with try/catch 
blocks.
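
The convenience is easy to see: built-in indexing works in nothrow code 
as-is, while an Exception-based check would force try/catch onto every 
nothrow caller. A sketch of the contrast (checkedIndex and firstChecked 
are hypothetical, just to show the difference):

import std.exception : enforce;

// Built-in indexing is allowed in nothrow code as-is, because a failed
// bounds check throws an Error, which nothrow does not check for.
int first(int[] a) nothrow
{
    return a[0];
}

// Hypothetical Exception-throwing bounds check, for contrast.
int checkedIndex(int[] a, size_t i)
{
    enforce(i < a.length, "index out of bounds");
    return a[i];
}

// The same nothrow code using it now has to be peppered with try/catch.
int firstChecked(int[] a) nothrow
{
    try
    {
        return checkedIndex(a, 0);
    }
    catch (Exception e)
    {
        return 0;  // forced to pick some handling policy here
    }
}

void main()
{
    int[] a = [1, 2, 3];
    assert(first(a) == 1);
    assert(firstChecked(a) == 1);
}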

>
>> One does not require the other.
>
> Correct, but the runtime has to be safe in the *general* case, so it
> *must* assume the worst in case of a bug.

It's also easy to demonstrate that throwing an Exception instead of an 
Error is perfectly safe: my array wrapper is perfectly safe and does not 
throw an Error on bad indexing.
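
Something along these lines, as a minimal sketch of the idea rather than 
the actual wrapper (CheckedArray is a made-up name):

import std.exception : enforce;

// Sketch of an array wrapper whose bounds failure is a recoverable
// Exception rather than an Error. The bad access is still prevented.
struct CheckedArray(T)
{
    private T[] data;

    this(T[] data) @safe
    {
        this.data = data;
    }

    ref T opIndex(size_t i) @safe
    {
        enforce(i < data.length, "index out of bounds");
        return data[i];
    }

    @property size_t length() const @safe
    {
        return data.length;
    }
}

unittest
{
    auto a = CheckedArray!int([1, 2, 3]);
    assert(a[1] == 2);
    try
    {
        auto x = a[10];
        assert(false, "should have thrown");
    }
    catch (Exception e)
    {
        // Recoverable: reject the request, log it, keep the program running.
    }
}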

-Steve

