Kernel buffer overflow exposes iPhone 11 Pro to radio-based attacks

Bruce Carneal bcarneal at gmail.com
Wed Dec 9 17:21:57 UTC 2020


On Wednesday, 9 December 2020 at 16:34:28 UTC, H. S. Teoh wrote:
> On Wed, Dec 09, 2020 at 04:02:46PM +0000, Bruce Carneal via 
> Digitalmars-d wrote:
>> On Wednesday, 9 December 2020 at 13:28:14 UTC, Timon Gehr 
>> wrote:
>> > On 09.12.20 14:06, Paul Backus wrote:
>> > > On Wednesday, 9 December 2020 at 08:29:58 UTC, Bruce 
>> > > Carneal wrote:
>> > > > [...]
>> > > 
>> > > This does not work for extern(C) functions because their 
>> > > names are
>> > > not mangled.
>> > 
>> > It does not even work for extern(D) functions because their 
>> > return
>> > types are not mangled.
>> 
>> I did not know this.
>> 
>> If we lose information when we compile separately it's over.  
>> If you can't close the proof you've got no proof.
>
> This has nothing to do with separate compilation.  The C ABI 
> simply does not encode parameter or return types, to say nothing 
> of function attributes.  The D ABI does not encode return type
> information, because doing so would imply overloading by return 
> type, which is not possible in D.

If you compile from source, @safe can be enforced because the 
compiler has all the information it needs.  However things end up 
being represented in .o files and libs, if that representation 
does not carry the necessary information then you can't enforce 
@safe mechanically.
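
For instance, a minimal sketch (the names are invented) of where 
the information actually lives:

    // Body visible: the compiler verifies @safe itself.
    @safe int addChecked(int a, int b)
    {
        return a + b;
    }

    // Declaration only, body in some .o file or lib: the @safe
    // here is a promise the compiler has to take on faith.
    @safe extern (D) int addElsewhere(int a, int b);

    // extern(C): the object file carries only the bare symbol
    // name, so no parameter types or attributes survive at all.
    extern (C) @safe int addFromC(int a, int b);

Only the first of those is actually checked; for the other two the 
attribute is an annotation on a declaration, not something that 
can be recovered from the object file.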

>
> You cannot "prove" anything in the way of proving it down to 
> the raw machine level.  At some point, you have to depend on 
> the consistency of the layer below you.  Just because you 
> mangle return types, does not prevent lower-level breakage of 
> your proof.  For example, you can declare a @safe function at 
> the D language level, but link it to a C function that just 
> happens to have the same name as the mangled D name, and you're 
> free to make this C function do whatever you want, and there's 
> nothing the compiler can do to enforce anything.  Just because 
> the mangled name says it's safe, does not guarantee it's safe.

I agree. If your information is only probabilistic, then you do 
not have a hard proof.
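
To make that concrete, a small hypothetical sketch in D (the names 
are invented):

    // The caller sees only this declaration and trusts the
    // attributes attached to it.
    @safe extern (D) int looksSafe(int x);

    // The linker, by contrast, sees only the mangled symbol name:
    pragma(msg, looksSafe.mangleof);

    // Any object file -- C, hand-written assembly, or a patched
    // binary -- that exports a symbol with exactly that name will
    // satisfy the link, whatever its body actually does.

The mangled name is evidence of what the declaration claimed, not 
of what the linked code does.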

>
> Even if you somehow require the D compiler to take over the 
> entire process down to the executable, bypassing the system 
> linker, that still guarantees nothing.  I can use a hex editor 
> to modify the executable after-the-fact to break the @safe-ty 
> proof, and there's nothing the compiler or the language can do 
> about it.  Even if you somehow "protect" your executable, I can 
> run it on a modified OS that edits the program in-memory while 
> it's executing, and there's nothing the language can do about 
> it either.

No one can prove anything about a system that is modified after 
it leaves the proof domain.  Does anyone believe otherwise?

>
> Basically, the automatic verification of @safe, etc., relies on 
> the consistency of the lower-level layers. It does not "prove" 
> anything in the absolute sense.  You might be able to do that 
> if you invent your own hardware from ground up, starting from 
> the transistor level.  But you can't do that with a programming 
> language that's intended to run on a large variety of 
> preexisting systems.  Just like in mathematical proofs, you can 
> only prove down to the axioms, you cannot prove the axioms 
> themselves.  If the axioms are violated, your proof collapses.

Does anyone think otherwise?

> The guarantees of @safe only hold if certain assumptions about 
> the lower-level layers hold; however, there is nothing you can 
> do to guarantee that.  You cannot prove your axioms, you can 
> only assume them.

Again, I don't see this as an issue.  As you note, there is 
nothing any proof system can do when the "axioms" are violated. 
We're not trying to prove that all HW is correctly implemented, 
for example, or that it can withstand cosmic rays or a power 
spike, or, for that matter, that an arbitrarily patched binary 
has any properties at all.

I agree that, even within the compiled language domain, "proof" 
is probably better thought of as aspirational rather than 
literal.  There may be errors in the "proof": compiler 
implementation errors, language definition ambiguities/errors 
that were "correctly" implemented, and probably other modes of 
failure that I've not thought of.

However, I do not agree that because we cannot prove everything 
we should stop trying to prove anything.

Finally, thanks for bringing up some of the hazards of using the 
word "proof" wrt computer systems.

>
>
> T



