Why does nobody seem to think that `null` is a serious problem in D?
Jonathan M Davis
newsgroup.d at jmdavisprog.com
Wed Nov 21 07:47:14 UTC 2018
On Tuesday, November 20, 2018 11:04:08 AM MST Johan Engelen via Digitalmars-
d-learn wrote:
> On Tuesday, 20 November 2018 at 03:38:14 UTC, Jonathan M Davis
>
> wrote:
> > For @safe to function properly, dereferencing null _must_ be
> > guaranteed to be memory safe, and for dmd it is, since it will
> > always segfault. Unfortunately, as I understand it, it is
> > currently possible with ldc's optimizer to run into trouble,
> > since it'll do things like see that something must be null and
> > therefore assume that it must never be dereferenced, since it
> > would clearly be wrong to dereference it. And then when the
> > code hits a point where it _does_ try to dereference it, you
> > get undefined behavior. It's something that needs to be fixed
> > in ldc, but based on discussions I had with Johan at dconf this
> > year about the issue, I suspect that the spec is going to have
> > to be updated to be very clear on how dereferencing null has to
> > be handled before the ldc guys do anything about it. As long as
> > the optimizer doesn't get involved everything is fine, but as
> > great as optimizers can be at making code faster, they aren't
> > really written with stuff like @safe in mind.
>
> One big problem is the way people talk and write about this
> issue. There is a difference between "dereferencing" in the
> language, and reading from a memory address by the CPU.
> Confusing language semantics with what the CPU is doing happens
> often in the D community and is not helping these debates.
>
> D is proclaiming that dereferencing `null` must segfault but that
> is not implemented by any of the compilers. It would require
> inserting null checks upon every dereference. (This may not be as
> slow as you may think, but it would probably not make code run
> faster.)
>
> An example:
> ```
> class A {
>     int i;
>     final void foo() {
>         import std.stdio;
>         writeln(__LINE__);
>         // i = 5;
>     }
> }
>
> void main() {
>     A a;
>     a.foo();
> }
> ```
>
> In this case, the actual null dereference happens on the last
> line of main. Since dlang 2.077, however, the program runs fine.
> Now, when `foo` is modified such that it writes to member field
> `i`, the program does segfault (it writes to address 0).
> D does not make dereferencing on class objects explicit, which
> makes it harder to see where the dereference is happening.
Yeah. It's one of those areas where the spec will need to be clear. Like
C++, D doesn't actually dereference unless it needs to. And IMHO, that's
fine. The core issue is that operations that aren't memory safe can't be
allowed to happen in @safe code, and the spec needs to be defined in such a
way that requires that to be true, though not necessarily by being super
specific about every detail of how a compiler is required to do it.
> So, I think all compiler implementations are not spec compliant
> on this point.
> I think most people believe that compliance is too costly for the
> kind of software one wants to write in D; the issue is similar to
> array bounds checking that people explicitly disable or work
> around.
> For compliance we would need to change the compiler to emit null
> checks on all @safe dereferences (the opposite direction was
> chosen in 2.077). It'd be interesting to do the experiment.
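To make that experiment concrete, here is a hedged sketch of what a compiler-inserted null check on a @safe dereference might look like if written out at the source level. This is purely illustrative (the class `A` and function `readField` are made up for this example); no actual D compiler lowers code this way in source:

```d
// Hypothetical source-level version of a compiler-inserted null check:
// conceptually, `a.i` in @safe code would be guarded like this, so a
// null dereference always halts deterministically instead of being UB.
class A { int i; }

int readField(A a) @safe
{
    if (a is null)
        assert(0, "null dereference"); // guaranteed halt, never UB
    return a.i;
}

void main() @safe
{
    auto a = new A;
    a.i = 42;
    assert(readField(a) == 42);
}
```

The cost Johan mentions is exactly these extra branches on every dereference, which is why the comparison to array bounds checking is apt.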
Ultimately here, the key thing is that it must be guaranteed that
dereferencing null is @safe in @safe code (regardless of whether that
involves * or . and regardless of how that is achieved). It must never read
from or write to invalid memory. If it can, then dereferencing a null
pointer or class reference is not memory safe, and since there's no way to
know whether a pointer or class reference is null or not via the type
system, dereferencing pointers and references in general would then be
@system, and that simply can't be the case, or @safe is completely broken.
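As an aside, since the type system cannot express "never null" today, the closest one can get is a hand-rolled wrapper that checks at construction time. The following `NonNull` struct is hypothetical (it is not part of Phobos) and only illustrates the kind of guarantee the type system currently lacks:

```d
// Hypothetical non-null wrapper: the invariant is established once at
// construction, so later dereferences need no per-access check.
struct NonNull(T) if (is(T == class))
{
    private T obj;

    this(T o)
    {
        assert(o !is null, "NonNull requires a non-null object");
        obj = o;
    }

    T get() { return obj; }
    alias get this; // forward member access to the wrapped object
}

class A { int i = 7; }

void main()
{
    auto nn = NonNull!A(new A);
    assert(nn.i == 7); // member access forwards through alias this
}
```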
Typically, that protection is done right now via segfaults, but we know that
that's not always possible. For instance, if the object is large enough
(larger than one page size IIRC), then attempting to dereference a null
pointer won't necessarily segfault. It can actually end up accessing invalid
memory if you try to access a member variable that's deep enough in the
object. I know that in that particular case, Walter's answer to the problem
is that such objects should be illegal in @safe code, but AFAIK, neither the
compiler nor the spec have yet been updated to match that decision, which
needs to be fixed. But regardless, in any and all cases where we determine
that a segfault won't necessarily protect against accessing invalid memory
when a null pointer or reference is dereferenced, then we need to do
_something_ to guarantee that that code is @safe - which probably means
adding additional null checks in most cases, though in the case of the
overly large object, Walter has a different solution.
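The overly-large-object case can be sketched as follows. The class `Huge` is invented for illustration; the point is that the field's offset from the object base exceeds a typical page size, so a read through a null reference would land well past the unmapped pages at address 0 and is not guaranteed to fault:

```d
// Hypothetical example of the "overly large object" hazard: `tail`
// sits roughly 1 MiB past the object base, so `h.tail` on a null `h`
// would read from address ~1 MiB, which may be mapped, valid memory.
class Huge
{
    ubyte[1024 * 1024] pad; // pushes `tail` ~1 MiB into the object
    int tail;
}

void main()
{
    // The field offset really does exceed a 4 KiB page.
    assert(Huge.tail.offsetof >= 1024 * 1024);

    Huge h = null;
    // int x = h.tail; // not guaranteed to segfault; left commented out
}
```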
IMHO, requiring something in the spec like "it must segfault when
dereferencing null," as has been suggested before, is probably not a good
idea, since it is really getting too specific (especially considering that
some folks have argued that not all architectures segfault like x86 does),
but ultimately,
the question needs to be discussed with Walter. I did briefly discuss it
with him at this last dconf, but I don't recall exactly what he had to say
about the ldc optimization stuff. I _think_ that he was hoping that there
was a way to tell the optimizer to just not do that kind of optimization,
but I don't remember for sure. Ultimately, the two of you will probably have
to discuss it. Either way, I know that he wanted a bugzilla issue on the
topic, but I keep forgetting about it. First, I need to at least dig through
the spec to figure out what it actually says right now, which probably isn't
much.
- Jonathan M Davis