Why use a DFA instead of DIP1000?
Dennis
dkorpel at gmail.com
Tue Sep 16 09:59:26 UTC 2025
On Monday, 15 September 2025 at 23:02:25 UTC, Adam Wilson wrote:
> Here are examples of C#'s DFA in action:
>
> (...)
>
> There is no rule that states that DFA must be slow as there
> exist examples disproving the assertion.
When Walter talks about DFA, he means the specific algorithm
he learned in the 1980s and implemented in his backend optimizer,
which solves data-flow equations in a loop until it converges on
a fixed point. That algorithm is superlinear, so it's considered
'too slow' for the frontend, which aims to analyze n statements
in O(n) time.
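For illustration, here is a minimal sketch of that classic fixed-point
style of analysis (a liveness computation over a toy control-flow graph).
It is not Walter's backend code; all names (Block, Liveness.Solve, etc.)
are made up for this example. The point is that the outer loop keeps
re-solving the equations for every block until nothing changes, so the
work is not bounded by a constant per statement:

```C#
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical CFG node; illustrative only, not dmd's data structures.
class Block
{
    public HashSet<string> Use = new();   // variables read before being written
    public HashSet<string> Def = new();   // variables written
    public List<Block> Successors = new();
    public HashSet<string> LiveIn = new();
    public HashSet<string> LiveOut = new();
}

static class Liveness
{
    // Classic iterative data-flow: re-solve the equations for every block
    // until a whole pass makes no changes (the fixed point). Loops in the
    // CFG force repeated passes, which is what makes this superlinear.
    public static void Solve(List<Block> blocks)
    {
        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var b in blocks)
            {
                // LiveOut(b) = union of LiveIn over all successors
                var liveOut = new HashSet<string>(b.Successors.SelectMany(s => s.LiveIn));
                // LiveIn(b) = Use(b) ∪ (LiveOut(b) \ Def(b))
                var liveIn = new HashSet<string>(liveOut);
                liveIn.ExceptWith(b.Def);
                liveIn.UnionWith(b.Use);

                if (!liveOut.SetEquals(b.LiveOut) || !liveIn.SetEquals(b.LiveIn))
                {
                    b.LiveOut = liveOut;
                    b.LiveIn = liveIn;
                    changed = true;
                }
            }
        }
    }
}

class FixedPointDemo
{
    static void Main()
    {
        // Tiny loop: b0 defines x, b1 reads x and n and branches back to itself.
        var b0 = new Block { Def = { "x" } };
        var b1 = new Block { Use = { "x", "n" } };
        b0.Successors.Add(b1);
        b1.Successors.Add(b1); // back edge: needs extra iterations to converge
        Liveness.Solve(new List<Block> { b0, b1 });
        Console.WriteLine(string.Join(", ", b0.LiveIn)); // only "n" is live into b0
    }
}
```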
What Rikki implements is a 4th semantic pass, which analyzes the
code after the existing 3 linear semantic passes have completed.
I don't know exactly how it works, but from his communication I
gather it's a subset of 'full' DFA which trades a bit of accuracy
for compile speed.
The C# examples of unused symbols are, from my brief
experimentation on https://dotnetfiddle.net, even simpler than
that. They don't take control flow into account at all:
```C#
public class Program
{
    static int y;

    public static void Main()
    {
        int x = 0;
        if (y < 0)
            if (y > 0)  // this branch is dead code
                x++;    // but C# doesn't mark this line as unreachable, or x as unused
    }
}
```
That's not to say it's not useful: I think dmd should compute
this information similarly and expose it through an LSP
interface. However, this can be implemented in the existing
linear semantic3 pass with a simple usage counter/boolean per
declaration.
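To make that concrete, here is a rough sketch of the idea, in the same
spirit as the C# snippet above. The AST types and names are invented for
illustration, not dmd's semantic3 internals: each declaration carries a
'used' flag, one linear walk over the resolved body flips it on every
reference, and whatever is still untouched at the end gets reported. No
data-flow equations, no fixed point:

```C#
using System;
using System.Collections.Generic;

// Illustrative AST fragments; not dmd's actual data structures.
abstract record Node;
record Declaration(string Name) : Node;
record Reference(Declaration Target) : Node;
record FunctionBody(List<Declaration> Locals, List<Node> Statements) : Node;

static class UnusedCheck
{
    // One linear pass: O(n) in the number of nodes, a single boolean
    // (membership in 'used') per declaration.
    public static IEnumerable<Declaration> FindUnused(FunctionBody body)
    {
        var used = new HashSet<Declaration>();
        foreach (var node in body.Statements)
            if (node is Reference r)
                used.Add(r.Target);

        foreach (var decl in body.Locals)
            if (!used.Contains(decl))
                yield return decl;
    }
}

class UsageDemo
{
    static void Main()
    {
        var x = new Declaration("x");
        var y = new Declaration("y");
        // x is referenced once, y never is.
        var body = new FunctionBody(
            new List<Declaration> { x, y },
            new List<Node> { new Reference(x) });

        foreach (var unused in UnusedCheck.FindUnused(body))
            Console.WriteLine($"unused declaration: {unused.Name}"); // prints: y
    }
}
```

Like the C# compiler's check above, this deliberately ignores
reachability, which is what keeps it cheap while still catching the
common case.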
These 3 things have all been labeled 'DFA' so far, which makes the
discussion really confusing, so it's best to clarify exactly
which DFA flavor you're talking about.