Why use a DFA instead of DIP1000?
Richard (Rikki) Andrew Cattermole
richard at cattermole.co.nz
Sat Sep 13 15:35:34 UTC 2025
On 13/09/2025 10:13 PM, IchorDev wrote:
> On Saturday, 13 September 2025 at 02:39:40 UTC, Richard (Rikki) Andrew
> Cattermole wrote:
>> This is clearly a false positive; that branch could never run!
>
> It could run if asserts were off. Obviously asserts are never meant to
> fail when the program is in a correct state though.
>
>> One of the hallmarks of a real data flow analysis engine is called
>> dead code elimination; all optimising backends implement it. In fact,
>> it's one of the first that ever gets implemented and DIP1000 can't do it!
>
> This example doesn't seem very realistic. A branch that never runs
> shouldn't really exist to begin with, right? Can you think of any more
> realistic examples where DFA allows you to infer safety where DIP1000 is
> currently limited?
It's not meant to be too realistic; what it is meant to do is have all
the primitives in place to show that the compiler has the ability to
track state and to reason about what happens should X happen.
Everything from loops, assignments, and if statements to function calls
messes with this, so it's not a great example. The logs from stuff like
this get very big. Small font size + multi-gigabyte logs. Not fun and
certainly not educational.
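To make the shape of it concrete, here is a minimal sketch of my own
(not the example from the earlier post) of the kind of dead branch
DIP1000 trips over:

```d
// Illustration only: a branch that can never execute still triggers a
// DIP1000 escape error, because DIP1000 checks each assignment in
// isolation and does not track the value of `never`.
int* global;

void store(scope int* ptr) @safe
{
    bool never = false;
    assert(!never);

    if (never)
        global = ptr; // rejected under -dip1000, even though it is dead code
}
```

A DFA engine that constant-folds `never` can mark that branch as
unreachable and accept the function; DIP1000 has no notion of
reachability at all.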
> Also I'm curious what other problems DFA could be applied to? For
> instance, detecting what types a function throws and allowing you to
> catch only those types instead of always having to `catch(Exception)` to
> satisfy the compiler. Or doing more complex type inference.
Exceptions need a set of thrown types in the compiler, rather than just
an "is the nothrow flag set" bit. The compiler can (and should) do this
without DFA.