Program logic bugs vs input/environmental errors

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Sun Nov 2 09:53:44 PST 2014


On 11/2/2014 3:48 AM, Dicebot wrote:
> On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via Digitalmars-d wrote:
> Which is exactly the statement I call wrong. With current OSes processes aren't
> decoupled units at all - it is all about feature set you stick to. Same with any
> other units.

They have hardware protection against sharing memory between processes. It's a 
reasonable level of protection.


>> If you go below that level of
>> granularity, you have the possibility of shared memory being corrupted
>> by one thread (or fibre, or whatever smaller than a process) affecting
>> the other threads.
> You already have that possibility at process level via shared process memory

1. very few processes use shared memory
2. those that do should regard it as input/environmental, and not trust it


> and kernel mode code.

Kernel mode code is the responsibility of the operating system, not the app.


> And you still don't have that possibility at thread/fiber
> level if you don't use mutable shared memory (or any global state in general).

A buffer overflow will render all that protection useless.


> It is all about system design.

It's about the probability of coupling and the level of coupling that your system 
can stand. Process level protection is adequate for most things.


> Pretty much only reliably decoupled units I can imagine are processes running in
> different restricted virtual machines (or, better, different physical machines).
> Everything else gives just certain level of expectations.

Everything is coupled at some level. Again, it's about the level of reliability 
needed.


> Walter has experience with certain types of systems where process is indeed most
> appropriate unit of granularity and calls that a silver bullet by explicitly
> designing language

I design the language to do what it can. A language cannot compensate for 
coupling and bugs in the operating system, nor can a language compensate for two 
machines being plugged into the same power circuit.


> in a way that makes any other approach inherently complicated
> and effort-consuming.

Using enforce is neither complicated nor effort-consuming.

The idea that asserts can be recovered from is fundamentally unsound, and makes 
D unusable for robust critical software. Asserts are for checking for 
programming bugs. A bug can be tripped because of a buffer overflow, memory 
corruption, a malicious code injection attack, etc.

NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.

Running arbitrary cleanup code at this point is literally undefined behavior. 
This is not a failure of language design - no language can offer any guarantees 
about this.

If you want code cleanup to happen, use enforce(). If you are using enforce() to 
detect programming bugs, well, that's your choice. enforce() isn't any more 
complicated or effort-consuming than using assert().



