Program logic bugs vs input/environmental errors
Dicebot via Digitalmars-d
digitalmars-d at puremagic.com
Sun Nov 2 15:44:37 PST 2014
On Sunday, 2 November 2014 at 17:53:45 UTC, Walter Bright wrote:
> On 11/2/2014 3:48 AM, Dicebot wrote:
>> On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via
>> Digitalmars-d wrote:
>> Which is exactly the statement I call wrong. With current OSes
>> processes aren't decoupled units at all - it is all about the
>> feature set you stick to. Same with any other units.
>
> They have hardware protection against sharing memory between
> processes. It's a reasonable level of protection.
reasonable default - yes
reasonable level of protection in general - no
> 1. very few processes use shared memory
> 2. those that do should regard it as input/environmental, and
> not trust it
This is no different from:
1. very few threads use shared memory
2. those that do should regard it as input/environmental
>> and kernel mode code.
>
> Kernel mode code is the responsibility of the OS system, not
> the app.
In some (many?) large-scale server systems the OS is the app, or at
least heavily integrated with it. Thinking about the app as a single
independent user-space process is a bit... outdated.
>> And you still don't have that possibility at thread/fiber
>> level if you don't use mutable shared memory (or any global
>> state in general).
>
> A buffer overflow will render all that protection useless.
Good thing we have @safe and thread-local memory by default!
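(A minimal sketch of what those defaults mean in practice; the names
here are illustrative only.)

    // Module-level variables in D are thread-local by default;
    // cross-thread sharing has to be opted into with `shared`.
    int perThreadCounter;          // each thread gets its own copy
    shared int processWideCounter; // one instance visible to all threads

    @safe void bump()
    {
        ++perThreadCounter;      // safe: no other thread can touch it
        // ++processWideCounter; // shared access would need atomics/synchronization
    }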
>> It is all about system design.
>
> It's about the probability of coupling and the level of that
> your system can stand. Process level protection is adequate for
> most things.
Again, I am fine with advocating it as a reasonable default. What
frustrates me is intentionally making any other design harder than it
should be by explicitly allowing normal cleanup to be skipped. Skipping
cleanup is easy to achieve by installing a custom assert handler (it
could be a generic Error handler too), but impossible to opt out of
when it is the default.
Because of the above, avoiding further corruption during cleanup does
not sound to me like a strong enough benefit to force on everyone.
Ironically, in a system with decent fault protection and safety
redundancy it won't even matter (everything the cleanup could possibly
corrupt is duplicated and cross-checked anyway).
>> Walter has experience with certain types of systems where the
>> process is indeed the most appropriate unit of granularity, and
>> calls that a silver bullet by explicitly designing the language
>
> I design the language to do what it can. A language cannot
> compensate for coupling and bugs in the operating system, nor
> can a language compensate for two machines being plugged into
> the same power circuit.
I don't expect you to do magic. My complaint is about making decisions
that support designs you have great expertise with but hamper something
different (yet still very real) - decisions that are uncharacteristic
for D (which I believe is a non-opinionated language) and don't really
belong in a systems programming language.
>> in a way that makes any other approach inherently complicated
>> and effort-consuming.
>
> Using enforce is neither complicated nor effort consuming.
> If you want code cleanup to happen, use enforce(). If you are
> using enforce() to detect programming bugs, well, that's your
> choice. enforce() isn't any more complicated or
> effort-consuming than using assert().
I don't have any other choice, and I don't like it. It is
effort-consuming because it requires a manually maintained exception
hierarchy and style rules to keep errors distinct from exceptions -
something the language otherwise provides out of the box. And there is
always that third-party library that is hard-coded to throw Error.
It is not something I realistically expect to change in D, and there
are specific plans for working around it (thanks for helping with that,
btw!). I just mention it as one of the few D design decisions I find
conceptually broken.
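To make the distinction concrete, here is a minimal sketch (the types
and messages are illustrative only): enforce() throws an Exception and
gets ordinary unwinding, while a failed assert() throws AssertError, an
Error, for which running scope(exit) and destructors is not guaranteed.

    import std.exception : enforce;
    import std.stdio : writeln;

    struct Connection   // hypothetical resource
    {
        ~this() { writeln("connection closed"); }
    }

    void handleRequest(int size)
    {
        auto conn = Connection();
        scope (exit) writeln("request cleanup");

        // Input/environmental problem: throws Exception, so the
        // destructor and scope(exit) above run while unwinding.
        enforce(size > 0, "bad request size");

        // Programming bug: throws AssertError (an Error); running
        // the cleanup above is not guaranteed in that case.
        assert(size < 4096, "internal invariant violated");
    }

    void main()
    {
        try handleRequest(-1);
        catch (Exception e) writeln("recovered: ", e.msg);
    }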
> The idea that asserts can be recovered from is fundamentally
> unsound, and makes D unusable for robust critical software.
Not "recovered" but "terminate user-defined portion of the
system".
> Asserts are for checking for programming bugs. A bug can be
> tripped because of a buffer overflow, memory corruption, a
> malicious code injection attack, etc.
>
> NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
As I have already mentioned, it almost never can be truly reliable. You
simply call one higher probability of reliability good enough and a
lower one disastrous. I don't agree this is the language's call to
make, even if the decision is reasonable and fits 90% of cases.
This is really no different from GC usage in Phobos before the @nogc
push. If a language decision can result in fundamental code base
fragmentation (even for a relatively small portion of users), it is
likely an overly opinionated decision.
> Running arbitrary cleanup code at this point is literally
> undefined behavior. This is not a failure of language design -
> no language can offer any guarantees about this.
A small chance of undefined behaviour vs. a 100% chance of resource
leaks? The former can be more practical in many cases. And if it isn't
for a specific application, one can always install a custom assert
handler that kills the program right away. I don't see a deal breaker
here.