C `restrict` keyword in D

Jonathan M Davis via Digitalmars-d digitalmars-d at puremagic.com
Tue Sep 5 15:59:12 PDT 2017


On Tuesday, September 05, 2017 18:32:34 Johan Engelen via Digitalmars-d wrote:
> On Monday, 4 September 2017 at 21:23:50 UTC, Moritz Maxeiner wrote:
> > On Monday, 4 September 2017 at 17:58:41 UTC, Johan Engelen wrote:
> >> (The spec requires crashing on null dereference, but this part
> >> of the spec is ignored by DMD and LDC, and I assume by GDC too.
> >> Crashing on `null` dereference requires a null check on every
> >> dereference through an unchecked pointer, because address 0
> >> might be valid memory, and also because ptr->someDataField is
> >> not going to look up address 0, but 0+offsetof(someDataField)
> >> instead, e.g. potentially addressing a valid low address at
> >> 1000000, say.)
> >
> > It's not implemented as compiler checks because the "actual"
> > requirement is "the platform has to crash on null dereference"
> > (see the discussion in/around [1]). Essentially: "if your
> > platform doesn't crash on null dereference, don't use D on it
> > (at the very least not @safe D)".
>
> My point was that that is not workable. The "null dereference" is
> a D language construct, not something that the machine is doing.
> It's ridiculous to specify that reading from address 1_000_000
> should crash the program, yet that is exactly what is specified
> by D when running this code (and thus null checks need to be
> injected in many places to be spec compliant):
>
> ```
> struct S {
>    ubyte[1_000_000] a;
>    int b;
> }
> void main() {
>     S* s = null;
>     s.b = 1;
> }
> ```

dmd and the spec were written with the assumption that the CPU is going to
segfault your program when you dereference a null pointer. In the vast
majority of cases, that assumption holds. The problem, of course, is the case
you bring up, where the object is large enough that a field's offset from a
null base pointer can land beyond the unmapped low pages, in memory that may
actually be valid. And as Moritz points out, all that's required to fix that
is to insert null checks for those types. It shouldn't be necessary at all
for the vast majority of types. The CPU already handles them correctly, at
least on any x86-based system. I would expect any other modern CPU to do the
same, but I'm not familiar enough with other such systems to know for sure.
Either way, there should be no need to insert null checks all over the place
in x86-based code. At most, they're needed in a few places to deal with
abnormally large objects.
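
To make that concrete, here's a hand-written sketch (purely illustrative,
not what dmd/LDC/GDC actually emit) of the check a spec-compliant compiler
would have to inject for Johan's example, where the offset of .b lies far
past the low pages that are typically left unmapped:

```
struct S {
    ubyte[1_000_000] a;
    int b;
}

void storeB(S* s) {
    // Conceptual equivalent of what the compiler would have to emit for
    // `s.b = 1`: the store targets s + S.b.offsetof (about 0xF4240), which
    // the hardware won't necessarily fault on when s is null, so the base
    // pointer has to be checked explicitly before the field access.
    if (s is null)
        assert(0, "null dereference");
    s.b = 1;
}

void main() {
    S* s = null;
    storeB(s); // halts via the explicit check instead of writing to address 1_000_000
}
```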

Regardless, for @safe to do its job, the program does need to crash when
dereferencing null. So, if the CPU can't do the checks the way the spec
currently assumes, then the compiler is going to need to insert them, and
while that may hurt performance, I don't think there's really any way around
it while still ensuring that @safe code neither corrupts memory nor accesses
memory it's not supposed to. @system code could skip the checks to get the
full performance, but @safe is stuck.
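
For what it's worth, the "only for abnormally large types" criterion could be
expressed roughly like this; the 4 KiB guard-region figure and the helper are
assumptions made up for illustration, not anything the spec or the compilers
define:

```
// Assumed size of the unmapped region starting at address 0 (one 4 KiB page
// here; the real figure is platform-dependent).
enum size_t assumedGuardRegion = 4096;

// Hypothetical predicate: can dereferencing a null T* reach memory beyond
// the unmapped low region, so that an explicit check would be needed?
bool needsExplicitNullCheck(T)()
{
    return T.sizeof > assumedGuardRegion;
}

struct Small { int x; }                   // hardware segfault is enough
struct Big { ubyte[1_000_000] a; int b; } // would need an injected check

static assert(!needsExplicitNullCheck!Small());
static assert( needsExplicitNullCheck!Big());
```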

- Jonathan M Davis
