Why use a DFA instead of DIP1000?

Richard Andrew Cattermole (Rikki) richard at cattermole.co.nz
Sat Sep 13 02:39:40 UTC 2025


Yesterday (or today, depending upon who you ask), Dennis asked a 
wonderful question in the monthly meeting.

Paraphrased: why should we use a DFA (data flow analysis) engine instead of DIP1000?

And you know what, this is a great question, especially since 
DIP1000 is effectively a subset of a specialised DFA engine, albeit 
one with some serious ceiling limitations.

Afterwards, I came up with an example where DIP1000 will error:

```d
int* ptr;

void func(bool b, scope int* p) @safe {
   assert(!b);

   if (b) {
     ptr = p; // Error: scope variable `p` assigned to global variable `ptr`
   }
}
```

This is clearly a false positive; that branch could never run!

One of the hallmarks of a real data flow analysis engine is 
dead code elimination; all optimising backends implement it. 
In fact, it's one of the first optimisations a backend ever 
implements, and DIP1000 can't do it!
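To see the gap, compare a trivially dead branch with one that is 
only dead because of an assert. This toy snippet (purely 
illustrative; `constantDead` and `flowDead` are not from DIP1000 
or my engine) shows why the second case needs real flow analysis:

```d
void constantDead() @safe
{
    enum bool gate = false; // compile-time constant

    if (gate)
    {
        // Dead by simple constant folding; no flow analysis needed.
    }
}

void flowDead(bool b) @safe
{
    assert(!b); // b's falseness is a flow fact, not a constant

    if (b)
    {
        // Dead too, but only an engine that carries the assert
        // fact forward along this path can prove it.
    }
}
```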

But what if we did have the ability to model truthiness? Well, 
then we could cull that dead branch and skip the report. But 
that is slow, right? Nope! We can actually do this fast!
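To make the idea concrete, here is a minimal sketch of 
assert-driven truthiness tracking. It is a toy model, not my 
actual engine; `ToyDFA`, `assertNot`, and `branchLive` are 
hypothetical names:

```d
/// Truthiness fact tracked per variable by the analysis.
enum Truthiness { unknown, alwaysTrue, alwaysFalse }

/// Toy forward analysis over named booleans: asserts narrow facts,
/// and a branch guarded by an alwaysFalse fact is dead.
struct ToyDFA
{
    Truthiness[string] facts;

    /// Models `assert(!var);`: from here on, var is always false.
    void assertNot(string var)
    {
        facts[var] = Truthiness.alwaysFalse;
    }

    /// Models the guard of `if (var) { ... }`: is the branch live?
    bool branchLive(string var)
    {
        return facts.get(var, Truthiness.unknown) != Truthiness.alwaysFalse;
    }
}

unittest
{
    ToyDFA dfa;
    dfa.assertNot("b");           // models: assert(!b);
    assert(!dfa.branchLive("b")); // models: if (b) { ... } is dead
}
```

Carrying a handful of facts like this forward along each path is 
cheap, which is why branch culling doesn't have to be slow.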

Here is a modified example from my fast DFA engine, which I've 
been working on for around six months, covering both truthiness 
and nullability:

```d
void branchKill(bool b)
{
     assert(!b);

     if (b)
     {
         int* ptr;
         int val = *ptr; // no error: branch is dead
     }
}
```

My hope, should it succeed, is for it to be on by default; that 
requires it to be fast and to avoid false positives like the one 
above.

I hope this is enlightening for those who don't know what data 
flow analysis is all about!

