Program logic bugs vs input/environmental errors

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Sun Nov 2 19:28:28 PST 2014


On 11/2/2014 3:44 PM, Dicebot wrote:
>> They have hardware protection against sharing memory between processes. It's a
>> reasonable level of protection.
> reasonable default - yes
> reasonable level of protection in general - no

No language can help when that is the requirement.


>> 1. very few processes use shared memory
>> 2. those that do should regard it as input/environmental, and not trust it
>
> This is no different from:
>
> 1. very few threads use shared
> 2. those that do should regard it as input/environmental

It is absolutely different because of scale: having 1 KB of shared memory is very 
different from having 100 MB shared between processes, including the stack and 
program code.


>>> and kernel mode code.
>>
>> Kernel mode code is the responsibility of the OS system, not the app.
>
> In some (many?) large scale server systems the OS is the app, or at least heavily
> integrated with it. Thinking of the app as a single independent user-space process
> is a bit outdated.

Haha, I've used such a system (MSDOS) for many years. Switching to process 
protection was a huge advance. Sad that we're "modernizing" by reverting to such 
an awful programming environment.


>>> And you still don't have that possibility at thread/fiber
>>> level if you don't use mutable shared memory (or any global state in general).
>>
>> A buffer overflow will render all that protection useless.
>
> Nice we have @safe and default thread-local memory!

Assert is there to catch program bugs that should never happen; it is not for 
conditions that arise in a correctly functioning program. Nor can D possibly 
guarantee that called C functions are safe.
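This distinction between internal bugs and bad input maps directly onto D's built-in `assert` versus `std.exception.enforce`. A minimal sketch (the functions here are hypothetical examples, not from any real codebase):

```d
import std.exception : enforce;
import std.conv : to;

// assert checks an internal invariant: a failure is a program bug.
// It is compiled out in -release builds, and a failure throws
// AssertError, which is not meant to be caught and recovered from.
int divide(int a, int b)
{
    assert(b != 0, "caller violated the contract: b must be nonzero");
    return a / b;
}

// enforce validates input/environmental data: a failure throws an
// ordinary, recoverable Exception that normal error handling catches.
int parsePort(string s)
{
    int port = s.to!int;
    enforce(port > 0 && port < 65536, "port out of range");
    return port;
}
```

The rule of thumb this sketch encodes: use `assert` for conditions that can only be false if the program itself is wrong, and `enforce` (or an explicit throw of an `Exception`) for conditions that depend on data from outside the program.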


>>> It is all about system design.
>>
>> It's about the probability of coupling and the level of it that your system can
>> stand. Process-level protection is adequate for most things.
>
> Again, I am fine with advocating it as a reasonable default. What frustrates me
> is intentionally making any other design harder than it should be by explicitly
> allowing normal cleanup to be skipped. This behaviour is easy to achieve by
> installing a custom assert handler (it could be a generic Error handler too), but
> impossible to bail out of when it is the default one.

Running normal cleanup code when the program is in an undefined, possibly 
corrupted, state can impede proper shutdown.
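The custom handler mentioned above can be installed through druntime's `core.exception.assertHandler` (a settable property in current druntime; the exact API has varied across versions, so treat this as a sketch rather than a guaranteed signature). This handler logs the failure and aborts immediately, skipping unwinding and cleanup:

```d
import core.exception : assertHandler;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

// Runs instead of throwing AssertError when an assert fails.
// A project wanting a different shutdown policy could flush logs
// or notify a supervisor process here before terminating.
void myAssertHandler(string file, size_t line, string msg) nothrow
{
    fprintf(stderr, "assertion failed: %.*s(%zu): %.*s\n",
            cast(int) file.length, file.ptr, line,
            cast(int) msg.length, msg.ptr);
    abort(); // terminate without running destructors or scope guards
}

void main()
{
    assertHandler = &myAssertHandler;
    // ... application code ...
}
```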


> Because of the abovementioned, avoiding more corruption from cleanup does not
> sound to me like a strong enough benefit to force that on everyone.

I have considerable experience with what programs can do when they continue to run 
after a bug. This was on real-mode DOS, which infamously does not seg fault on 
errors.

It's AWFUL. I've had quite enough of having to reboot the operating system after 
every failure, and even then that often wasn't enough because it might scramble 
the disk driver code so it won't even boot.

I got into the habit of layering in asserts to stop the program when it went 
bad. "Do not pass go, do not collect $200" is the only strategy that has a hope 
of working under such systems.


> I don't expect you to do magic. My complaint is about making decisions that support
> designs you have great expertise with but hamper something different (but still
> very real) - decisions that are usually uncharacteristic of D (which I believe
> is a non-opinionated language) and don't really belong in a systems programming
> language.

It is my duty to explain how to use the features of the language correctly, 
including how and why they work the way they do. The how, why, and best 
practices are not part of a language specification.


> I don't have any other choice and I don't like it. It is effort-consuming because
> it requires a manually maintained exception hierarchy and style rules to keep
> errors distinct from exceptions - something the language otherwise provides to you
> out of the box. And there is always that third-party library that is hard-coded to
> throw Error.
>
> It is not something that I realistically expect to change in D, and there are
> specific plans for working with it (thanks for helping with that, btw!). I'm just
> mentioning it as one of the few D design decisions I find conceptually broken.

I hope to eventually change your mind about it being broken.
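The manually maintained hierarchy described above typically mirrors D's own split: recoverable conditions derive from `Exception`, unrecoverable ones from `Error`. A sketch (the class names are illustrative, not from any real library):

```d
// Recoverable: bad input or environment; callers may catch and retry.
class NetworkTimeout : Exception
{
    this(string msg) { super(msg); }
}

// Unrecoverable: a detected program bug; deliberately derives from Error.
class InvariantViolation : Error
{
    this(string msg) { super(msg); }
}

void handleRequest()
{
    try
    {
        // ... work that may throw ...
    }
    catch (Exception e)
    {
        // Recoverable: log and keep serving other requests.
    }
    // Error (including AssertError) is intentionally not caught here:
    // it propagates up and terminates the process.
}
```

The friction Dicebot points at is that any library throwing `Error` for a condition the application considers recoverable breaks this scheme, since catching `Error` is exactly what the design forbids.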


>> NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
> As I have already mentioned it almost never can be truly reliable.

That's correct, but not a justification for making it less reliable.


> You simply declare the higher reliability chance good enough and the lower one
> disastrous. I don't agree this is the language's call to make, even if the decision
> is reasonable and fits 90% of cases.

If D changes assert() to do unwinding, then D will become unusable for building 
reliable systems until I add in yet another form of assert() that does not.


> This is really no different than GC usage in Phobos before the @nogc push. If a
> language decision can result in fundamental code base fragmentation (even for a
> relatively small portion of users), it is likely an overly opinionated decision.

The reason I initiated this thread is to point out the correct way to use 
assert() and to get that into the culture of best practices for D. If I don't, 
people will tend to fill that vacuum with misunderstandings and misuse.

It is an extremely important topic.


>> Running arbitrary cleanup code at this point is literally undefined behavior.
>> This is not a failure of language design - no language can offer any
>> guarantees about this.
> Some small chance of undefined behaviour vs 100% chance of resource leaks?

If the operating system can't handle resource recovery for a process 
terminating, it is an unusable operating system.

