Fantastic exchange from DConf

Moritz Maxeiner via Digitalmars-d digitalmars-d at puremagic.com
Fri May 19 18:53:02 PDT 2017


On Friday, 19 May 2017 at 23:56:55 UTC, Dominikus Dittes Scherkl 
wrote:
> On Friday, 19 May 2017 at 22:06:59 UTC, Moritz Maxeiner wrote:
>> On Friday, 19 May 2017 at 20:54:40 UTC, Dominikus Dittes 
>> Scherkl wrote:
>
>>> I take this to mean the programmer who wrote the library, not 
>>> every user of the library.
>>
>> I take this to mean any programmer that ends up compiling it 
>> (if you use a precompiled version that you only link against 
>> that would be different).
> Why? Because you don't have the source? Go, get the source - at 
> least for open source projects this should be possible. I can't 
> see the difference.

Because imo the specification statement covers compiling @trusted 
code, not linking to already compiled @trusted code; so I 
excluded it from my statement.
Whether you can get the source is not pertinent to the statement.

>
>>> Hm - we should have some mechanism to add to some list of 
>>> people who already trust the code because they checked it.
>>
>> Are you talking about inside the code itself?
>> If so, I imagine digitally signing the functions source code 
>> (ignoring whitespace) and adding the signature to the 
>> docstring would do it.
> Yeah. But I mean: we need such a mechanism in the D review 
> process. It would be nice to have something standardized, so 
> that if I have checked something and found it really trustworthy, 
> I can make that public. Then everybody can see who has already 
> checked the code - and maybe concentrate on reviewing something 
> that has not yet been reviewed by many people, or not by anybody 
> they trust most.

Well, are you electing yourself to be the champion of this? 
Because I don't think it will happen without one.
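The mechanism sketched above - sign the function's source code ignoring whitespace, attach the attestation - could look something like the following. This is a hypothetical illustration, not an existing D facility; for brevity it uses a bare SHA-256 fingerprint where a real scheme would use actual digital signatures (e.g. Ed25519) so attestations cannot be forged:

```python
import hashlib
import re

def source_fingerprint(source: str) -> str:
    """Hash of the source with all whitespace removed, so that
    pure reformatting does not invalidate existing reviews."""
    normalized = re.sub(r"\s+", "", source)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical review record: reviewer -> fingerprint they vouched for.
# A real mechanism would store a digital signature here, not a name.
reviews: dict = {}

def attest(reviewer: str, source: str) -> None:
    reviews[reviewer] = source_fingerprint(source)

def still_valid(reviewer: str, source: str) -> bool:
    """A review only counts while the (whitespace-ignored)
    source is unchanged."""
    return reviews.get(reviewer) == source_fingerprint(source)

trusted_fn = "@trusted void copy(int* dst, int* src) { *dst = *src; }"
attest("alice", trusted_fn)

# Reformatting does not invalidate the review...
assert still_valid("alice",
    "@trusted void copy(int* dst, int* src)\n{\n    *dst = *src;\n}")
# ...but any semantic edit does.
assert not still_valid("alice",
    trusted_fn.replace("*dst = *src", "*src = *dst"))
```

The whitespace normalization is the part that makes signatures survive formatting churn; anything stronger (token-level normalization, comment stripping) would be a design decision for whoever champions this.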

>
>> [the compiler] only knows you (the programmer who invoked it), 
>> as the one it extends trust to.
> The compiler "trusts" anybody using it. This is of no value.

The compiler extends trust to whoever invokes it, that is correct 
(and what I wrote).
That person then manages that trust further, either explicitly or 
implicitly.
You can, obviously, manage that trust however you see fit, but 
*I* will still consider it negligence if you - as the author of 
some application - have not verified all the @trusted code you use.

> The important thing is who YOU trust. Or who you want the user 
> of your program to trust.
> Oftentimes it may be more convincing to the users of your 
> program if you ask them to trust company X, from which you 
> bought some library, than to trust your own ability to prove 
> the memory safety of the code built upon it - no matter whether 
> you compiled the library yourself or had that done by company X.

And MY trust is not transitive. If I trust person A, and A trusts 
person B, I do NOT implicitly trust person B. As such, if A wrote 
me a @safe application that uses @trusted code written by B, and A 
told me that he/she/it had not verified B's code, I would consider 
A negligent.

>
>> I am specifically not talking about what is legally your fault 
>> or not, because I consider that an entirely different matter.
> Different matter, but same chain of trust.
>
>>> nobody can check everything or even a relevant portion of it.
>> That entirely depends on how much @trusted code you have.
> Of course.
> But no matter how glad I would be to be able to check e.g. my 
> operating system for memory safety, even if only 1% of its code 
> were merely @trusted instead of @safe, it would still be too 
> much for me.
> This is only feasible if you shrink your view far enough.

What you call shrinking your view, information science calls 
choosing the appropriate abstraction for the problem domain: it 
makes no sense whatsoever to even talk about the memory safety of 
a programming language unless the infrastructure below it is 
excluded from the view:

If you do not use the appropriate abstraction, you can always 
say: high enough radiation can randomly flip bits in your 
computer and you're screwed, so memory safety does not exist. 
That, while true in practice, is of no help when designing 
applications that will run on computers not exposed to excessive 
radiation, so you exclude it.

>
> And in reverse: the more code is @safe, the further I can 
> expand my checking activities - but I still don't believe I 
> will ever be able to check everything.

Again: I specifically wrote about @trusted code, not all code.

>
>> I specifically stated reviewing any @trusted code, not all 
>> code.
> Yes. Still too much, I think.

I do not.

>
>> I agree in principle, but the statement I responded to was "D 
>> is memory safe", which either does or does not hold.
> And I say: No, D is not memory safe. In practice. Good, but 
> not 100%.
>
>> I also believe that considering the statement's truthfulness 
>> only makes sense under the assumption that nothing *below* it 
>> violates that, since the statement is about language theory.
> Ok, this is what I mean by "shrinking your view until it's 
> possible to check everything" - or, in this case, until it's 
> possible to prove something.
> But by doing so you also neglect things. Many things.

As stated above, this is choosing the appropriate abstraction for 
the problem domain. I know what can go wrong in the lower layers, 
but all of that is irrelevant to the problem domain of a 
programming language. It only becomes relevant again when you ask 
"Will my @safe D program (which is memory safe, I verified all 
@trusted code myself!) be memory safe when running on this 
specific system?", to which the generic answer is "it depends".

>
>> Of course anyone can choose to check whatever they wish. That 
>> does not change what *I* consider negligent.
> But neglecting is a necessity.

It may be a necessity for you (and I personally assume probably 
even most programmers), but that does not make it generally true.

>
>> In this context: It is one thing to be negligent (and I 
>> explicitly do not claim *not* to be negligent myself), but 
>> something completely different to pretend that being negligent 
>> is OK.
> It's not only ok. It's a necessity.

First, something being a necessity does not make it OK. Second: 
Whether it is a necessity is once more use case dependent.

> The necessity of a limited being in an infinite universe.

The amount of @trusted code is limited and thus the time needed 
to review it is as well.
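That bound can even be measured. A crude textual scan for @trusted occurrences in a source tree - a sketch, assuming D sources with a `.d` extension under some root directory - puts a number on the review surface:

```python
from pathlib import Path

def trusted_occurrences(root: str) -> list:
    """Return (file, line number) pairs for every line mentioning
    @trusted in the .d files under root - a crude upper bound on
    the code that needs manual review."""
    hits = []
    for path in sorted(Path(root).rglob("*.d")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if "@trusted" in line:
                hits.append((str(path), lineno))
    return hits
```

A textual scan overcounts (it also matches comments and string literals) and says nothing about how hard each spot is to verify, but it does demonstrate the point: the set of @trusted code, and therefore the review effort, is finite.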

