Why does nobody seem to think that `null` is a serious problem in D?
Jonathan M Davis
newsgroup.d at jmdavisprog.com
Thu Nov 22 17:36:23 UTC 2018
On Wednesday, November 21, 2018 3:24:06 PM MST Johan Engelen via
Digitalmars-d-learn wrote:
> On Wednesday, 21 November 2018 at 07:47:14 UTC, Jonathan M Davis wrote:
> > IMHO, requiring something in the spec like "it must segfault
> > when dereferencing null", as has been suggested before, is
> > probably not a good idea and is really getting too specific
> > (especially considering that some folks have argued that not
> > all architectures segfault like x86 does), but ultimately, the
> > question needs to be discussed with Walter. I did briefly
> > discuss it with him at this last dconf, but I don't recall
> > exactly what he had to say about the ldc optimization stuff. I
> > _think_ that he was hoping that there was a way to tell the
> > optimizer to just not do that kind of optimization, but I don't
> > remember for sure.
>
> The issue is not specific to LDC at all. DMD also does
> optimizations that assume that dereferencing [*] null is UB. The
> example I gave is dead-code-elimination of a dead read of a
> member variable inside a class method, which can only be done
> either if the spec says that `a.foo()` is UB when `a` is null, or
> if `this.a` is UB when `this` is null.
>
> [*] I notice you also use "dereference" for an execution machine
> [**] reading from a memory address, instead of the language doing
> a dereference (which may not necessarily mean a read from memory).
> [**] intentional weird name for the CPU? Yes. We also have D code
> running as webassembly...
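To make that example concrete, here's a minimal sketch (the names are
made up) of the kind of dead read Johan is describing:

class C
{
    int member;

    @safe int foo()
    {
        int unused = member; // dead read of this.member
        return 42;
    }
}

If `c.foo()` is called with `c` null, eliminating that dead read
removes the very access that would have trapped, and that's only legal
if the spec makes the dereference UB in the first place.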
Skipping a dereference of null shouldn't be a problem as far as memory
safety goes. The issue is if the compiler decides that UB allows it to
do absolutely anything and rearranges the code in such a way that invalid
memory is accessed. That cannot be allowed in @safe code in any D compiler.
The code doesn't need to actually segfault, but it absolutely cannot access
invalid memory even when optimized.
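As a contrived sketch of the danger (not a transformation that I know
any D compiler to actually perform):

@safe int f(int* p)
{
    int unused = *p; // dead read; UB if p is null?
    if (p is null)
        return -1;   // an optimizer reasoning "p was already
                     // dereferenced, so it can't be null" could
                     // delete this check as dead code
    return *p;       // leaving a dereference that the guard no
                     // longer protects
}

Eliminating the dead read on its own is harmless; combining it with the
inference that `p` must be non-null is what would break @safe's
guarantee.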
Whether dmd's dead code elimination algorithm is able to make @safe code
unsafe, I don't know. I'm not familiar with dmd's internals, and in general,
while I have a basic understanding of the stuff at the various levels of a
compiler, once the discussion gets to stuff like machine instructions and
how the optimizer works, my understanding definitely isn't deep. After we
discussed this issue with regard to ldc at dconf, I brought it up with
Walter, and he didn't seem to think that dmd had such a problem, but I
didn't think to raise that particular possibility either. It wouldn't
surprise me if dmd also had issues in its optimizer that made @safe not
@safe, and it wouldn't surprise me if it didn't. It's the sort of area where
I'd expect ldc's more aggressive optimizations to be much more likely
to run into trouble, and ldc is more likely to do things that Walter
isn't familiar with, but that doesn't mean that Walter couldn't have
missed something with dmd. After all, he does seem to like the idea of
allowing the
optimizer to assume that assertions are true, and as far as I can tell based
on discussions on that topic, he doesn't seem to have understood (or maybe
just didn't agree) that if we did that, the optimizer can't be allowed to
make that assumption if there's any possibility of the code not being memory
safe if the assumption is wrong (at least not without violating the
guarantees that @safe is supposed to provide). If the assumption turns
out to be wrong (which is quite possible, even if it's not likely in
well-tested code), then @safe code would violate memory safety.
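To sketch what I mean (hypothetical; as far as I know, no D compiler
currently does this):

@safe void copyInto(int[] dst, const(int)[] src)
{
    assert(dst.length >= src.length);
    foreach (i; 0 .. src.length)
        dst[i] = src[i]; // if the optimizer assumed the assertion
                         // held and elided the bounds check, a wrong
                         // assertion would become an out-of-bounds
                         // write in @safe code
}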
As I understand it, by definition, @safe code is supposed to not have
undefined behavior in it, and certainly, if any compiler's optimizer takes
undefined behavior as meaning that it can do whatever it wants at that point
with no restrictions (which is what I gathered from our discussion at
dconf), then I don't see how any D compiler's optimizer can be allowed to
think that anything is UB in @safe code. That may be why Walter was
updating various parts of the spec a while back to talk about
compiler-defined behavior as opposed to undefined behavior, since there
are certainly areas where the compiler can have leeway with what it
does, but there are places (at least in @safe code) where there must be
restrictions on what it can assume and do even when the implementation
is given leeway, or @safe's memory safety guarantees won't actually
hold.
In any case, clearly this needs to be sorted out with Walter, and the D spec
needs to be updated in whatever manner best fixes the problem. Null pointers
/ references need to be guaranteed to be @safe in @safe code. Whether that's
going to require that the compiler insert additional null checks in at least
some places, I don't know. I simply don't know enough about how things work
with stuff like the optimizers, but it wouldn't surprise me if in at least
some cases, the compiler is ultimately going to be forced to insert null
checks. Certainly, at minimum, I think it's quite clear that if a
platform doesn't segfault on a null dereference like x86 does, then the
compiler would have to insert them.
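For example, such a check might conceptually be lowered to something
like this (just a sketch; what a compiler would actually emit is up to
the implementation):

class C { int member; }

@safe int read(C c)
{
    // for `return c.member;` the compiler might generate:
    if (c is null)
        assert(0, "null dereference"); // assert(0) halts even in -release
    return c.member;
}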
- Jonathan M Davis