[OT] Windows dying

H. S. Teoh hsteoh at quickfur.ath.cx
Thu Nov 2 15:44:27 UTC 2017


On Thu, Nov 02, 2017 at 09:16:02AM +0000, Dave Jones via Digitalmars-d wrote:
> On Thursday, 2 November 2017 at 08:59:05 UTC, Patrick Schluter wrote:
> > On Thursday, 2 November 2017 at 06:28:52 UTC, codephantom wrote:
> > > 
> > > But Ken Thompson summed it all up nicely: "You can't trust code
> > > that you did not totally create yourself."
> > 
> > Even that is wrong. You can trust code you create yourself only if
> > it was reviewed by others who are as involved as you. I do not
> > trust the code I write. The code I write generally conforms to the
> > problem I think it solves. More than once I was wrong in my
> > assumptions, and therefore my code was wrong, even if perfectly
> > implemented.
> 
> He means trust in the sense that there's no nefarious payload hidden
> in there, not that it works properly.
[...]

Sometimes the line is blurry, though.  OpenSSL with the Heartbleed bug
has no nefarious payload -- but I don't think you could say you "trust"
it.  Trust is a tricky thing to define.

But more to the original point: Thompson's article on trusting trust
goes deeper than mere code.  The real point is that ultimately, you have
to trust some upstream vendor "by faith", as it were, because if you
want to be *really* paranoid, you'll have to question not only whether
your compiler comes with a backdoor of the kind Thompson describes in
the article, but also whether there's something nefarious going on with
the *hardware* your code is running on.  I mean, these days, CPUs come
with microcode, so even if you had access to a known-to-be-uncompromised
disassembler and reviewed the executable instruction by instruction, in
a philosophical sense you *still* cannot be sure that when you hand this
machine code to the CPU, it will not do something nefarious. What if the
microcode was compromised somewhere along the line?  And even if you
could somehow review the microcode and verify that it doesn't do
anything nefarious, do you really trust that the CPU manufacturer hasn't
modified some of the CPU design circuitry to do something nefarious? You
can review the VLSI blueprints for the CPU, but how do you know the
factory didn't secretly modify the hardware?  If you *really* wish to be
100% sure about anything, you'll have to use a scanning electron
microscope to verify that the hardware actually does what the
manufacturer says it does and nothing else.

(Not to mention, even if you *were* able to review every atom of your
CPU to be sure it does what it's supposed to and nothing else, how do
you know your hard drive controller isn't compromised to deliver a
different, backdoored version of your code when you run the
executable, yet serve up innocent bytes that match the source when
you're reviewing the binary? So you'll have to use the electron
microscope on your HD controller too, and on the rest of your
motherboard and everything else attached to it.)
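
To make the compiler part of that concrete for anyone who hasn't read
the article: the trick Thompson describes is a compiler that recognizes
two specific inputs and deliberately miscompiles them.  Here's a toy
sketch in D -- the function names and the naive string matching are
made up purely for illustration, not how any real compiler (or real
attack) would be written:

    import std.algorithm.searching : canFind;
    import std.stdio : writeln;

    string compile(string source)
    {
        // Stage 1: when the input looks like the login program, splice
        // in acceptance of a hard-coded master password.
        if (source.canFind("checkPassword"))
            source ~= "\n/* injected: also accept a master password */";

        // Stage 2: when the input looks like the compiler itself,
        // splice in both stages again, so that a pristine compiler
        // source still produces a compromised compiler binary.
        if (source.canFind("string compile("))
            source ~= "\n/* injected: stages 1 and 2 again */";

        // A real compiler would emit machine code here; returning the
        // doctored source just shows where the tampering happens.
        return source;
    }

    void main()
    {
        writeln(compile("bool checkPassword(string pw) { return pw == secret; }"));
    }

Stage 2 is the insidious part: once a compromised binary exists, the
injection logic can be deleted from the compiler's source and the
backdoor still survives every rebuild, because the already-compromised
binary keeps re-inserting it.  Reviewing the source then tells you
nothing, which is exactly why the trust question bottoms out in
binaries and hardware rather than in code.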

Of course, practically speaking, somewhere between reviewing code and
using an electron microscope (and even in the latter case, one has to
question whether the microscope manufacturer inserted something
nefarious to hide a hardware exploit -- so you'd better build your own
electron microscope from the ground up), there is a point where you
just say: OK, this is good enough, I'll take it on faith that below
this level everything works as advertised.  Otherwise, you'd get
nothing done, because nobody has a long enough lifetime, nor the
patience, nor the requisite knowledge, to review *everything* down to
the transistor level.  Somewhere along the line you just have to stop
and take it on faith that everything past that point isn't compromised
in some way.

And yes, I said and meant take on *faith* -- because even peer review
is a matter of faith: faith that the reviewers don't have a hidden
agenda and aren't colluding in secret to push one. That's very
unlikely in practice, but you can't be *sure*. And that's the point
Thompson was getting at.  You have to build up trust from *somewhere*
other than ground zero.  And because of that, you should always be
prepared to mitigate the unexpected circumstances that may compromise
the trust you've placed in something.  Rather than becoming paranoid,
locking yourself in a Faraday cage inside an underground bunker,
isolated from the big bad world, and building everything from scratch
yourself, you decide at what level to start building your trust, and
prepare ways to mitigate problems when what you trusted turns out not
to be so trustworthy after all.

So if you want to talk about trust, open source code is only the tip
of the iceberg.  The recent fiasco over buggy TPM chips generating
easily-cracked RSA keys is ample proof of this.  Your OS may be fine,
but when it relies on a TPM chip that has a bug, you have a problem.
And that's just a *bug* we're talking about.  What if it weren't a
bug, but a deliberate backdoor inserted by the NSA or some other
agency with an ulterior motive?  Your open source OS won't help you
there. And yes, the argument has been made that if only the TPM code
were open source, the bug would have been noticed. But again, that
depends. Just because the code is open source doesn't guarantee it
gets the attention it needs. And even if it does, there's always the
question of whether the hardware it's running on is compromised.  At
*some* point, you just have to draw the line and take things on faith;
otherwise you have no choice but to live in a Faraday cage inside an
underground bunker.


T

-- 
2+2=4. 2*2=4. 2^2=4. Therefore, +, *, and ^ are the same operation.

